A mouse cursor, also known as a mouse arrow or mouse pointer, is a graphical image used to activate or control certain elements in a graphical user interface. More plainly, it indicates where the mouse should perform its next action, such as opening a program or dragging a file to another location. The mouse pointer follows the path of the user’s hand as they move their mouse. The graphic shows an example of a mouse cursor.

See our mouse page for a full explanation of a mouse, types of mice, and other related information.

In the animated image next to this paragraph, you’ll see an example of a mouse cursor moving around the screen. By default, it looks like a pointed arrow. When positioned over selectable text, it appears as an I-beam cursor. When hovering over a link, it appears as a pointing hand.

Other examples of mouse pointers not pictured include the two-headed arrow, four-headed arrow, and the hourglass.

It’s named a mouse pointer because you use a computer mouse to move the pointer, which shows where your next click will take effect.





This design guide was created for Windows 7 and has not been updated for newer versions of Windows. Much of the guidance still applies in principle, but the presentation and examples do not reflect our current design guidance.

The mouse is the primary input device used to interact with objects in Windows. Mouse functionality can also encompass other pointing devices, such as trackballs, touchpads and pointing sticks built into notebook computers, pens used with Windows Tablet and Touch Technology, and, on computers with touchscreens, even a user’s finger.


Guidelines related to accessibility, pen, and touch are presented in separate articles.

Physically moving the mouse moves the graphic pointer (also referred to as the cursor) on the screen. The pointer has a variety of shapes to indicate its current behavior.

Typical mouse pointers

Mouse devices often have a primary button (usually the left button), a secondary button (usually the right), and a mouse wheel between the two. By positioning the pointer and clicking the primary and secondary buttons on the mouse, users can select objects and perform actions on them. For most interactions, pressing a mouse button while the cursor is over a target indicates the selected target, and releasing the button performs any action associated with the target.

All pointers, except the busy pointer, have a single-pixel hot spot that defines the exact screen location of the mouse. The hot spot determines which object is affected by mouse actions. Objects define a hot zone, which is the area where the hot spot is considered to be over the object. Typically, the hot zone coincides with the borders of an object, but it may be larger to make the user’s intended action easier to perform.
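As a rough sketch of this hit-testing idea (the function name `hit_test`, the `padding` parameter, and the rectangle model are illustrative, not a Windows API):

```python
# Sketch of hot-spot hit testing against a rectangular hot zone.
# `padding` models a hot zone that is larger than the object's borders.

def hit_test(hot_spot, zone, padding=0):
    """Return True if the pointer's hot spot (x, y) falls inside the
    object's hot zone (left, top, right, bottom), optionally enlarged
    by `padding` pixels on every side."""
    x, y = hot_spot
    left, top, right, bottom = zone
    return (left - padding <= x <= right + padding and
            top - padding <= y <= bottom + padding)
```

For example, a hot spot at (10, 10) misses an 8×8 object at the origin, but hits it once the hot zone is padded by two pixels on each side.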

The caret is the flashing vertical bar that is displayed when the user is typing into a text box or other text editor. The caret is independent of the pointer (by default, Windows hides the pointer while the user is typing).

The caret

The mouse has been a successful input device because it is easy to use for the typical human hand. Pointer-based interaction has been successful because it is intuitive and allows for a rich variety of experiences.

Well-designed user interface (UI) objects are said to have affordance: visual and behavioral properties of an object that suggest how it is used. The pointer acts as a proxy for the hand, allowing users to interact with screen objects much like they would with physical objects. We humans have an innate understanding of how the human hand works, so if something looks like it can be pushed, we try to push it; if it looks like it can be grabbed, we try to grab it. Consequently, users can figure out how to use objects with strong affordance just by looking at them and trying them.

Buttons and sliders have strong affordance

By contrast, objects with poor affordance are harder to figure out. Such objects often require a label or instruction to explain them.

Link text and icons have poor affordance

Right-clicking, double-clicking, and clicking with Shift or Ctrl key modifiers are three mouse interactions that aren’t intuitive, because they have no real world counterparts. Unlike keyboard shortcuts and access keys, these mouse interactions usually aren’t documented anywhere in the UI. This suggests that right-click, double-click, and keyboard modifiers shouldn’t be required to perform basic tasks, especially by novice users. It also suggests that these advanced interactions must have consistent, predictable behavior to be used effectively.

Double-clicking is used so extensively on the Windows desktop that it may not seem like an advanced interaction. For example, opening folders, programs, or documents in the file pane of Windows Explorer is performed by double-clicking. Opening a shortcut on the Windows desktop also uses double-clicking. By contrast, opening folders or programs in the Start menu requires a single click.

Selectable objects use single-click to perform selection, so they require a double-click to open, whereas non-selectable objects require only a single click to open. This distinction isn’t understood by many users (clicking a program icon is clicking a program icon, right?) and as a result, some users just keep clicking on icons until they get what they want.

Interacting with objects directly is referred to as direct manipulation. Pointing, clicking, selecting, moving, resizing, splitting, scrolling, panning, and zooming are common direct manipulations. By contrast, interacting with an object through its properties window or other dialog box could be described as indirect manipulation.

However, where there is direct manipulation, there can be accidental manipulation and therefore the need for forgiveness. Forgiveness is the ability to reverse or correct an undesired action easily. You make direct manipulations forgiving by providing undo, giving good visual feedback, and allowing users to correct mistakes easily. Associated with forgiveness is preventing undesired actions from happening in the first place, which you can do by using constrained controls and confirmations for risky actions or commands that have unintended consequences.

The standard mouse interactions depend on a variety of factors, including the mouse button clicked, the number of times it is clicked, the pointer’s position during the clicks, and whether any keyboard modifiers were pressed. Here is a summary of how these factors usually affect interaction:

The following table describes common mouse interactions and effects.

The following table describes common pointer shapes and usages.

The following table describes common mouse interactions.

The following table shows pointers that users see when performing an action that takes longer than a couple of seconds to complete.

Text and graphics links use a hand or “link select” pointer (a hand with the index finger pointing) because of their weak affordance. While links may have other visual cues to indicate that they are links (such as underlines and special placement), displaying the hand pointer on hover is the definitive indication of a link.

To avoid confusion, it is imperative not to use the hand pointer for other purposes. For example, command buttons already have a strong affordance, so they don’t need a hand pointer. The hand pointer must mean “this target is a link” and nothing else.

Windows supports the creation of custom pointers. For more details, see Setting the Cursor Image and User Input: Extended Example.

Many applications provide a palette of controls with custom pointers to support application functionality.

Microsoft Paint includes a palette of different functions, each with a unique pointer

Fitts’ Law is a well-known principle in graphical user interface design ergonomics. It essentially states that the time required to acquire a target is a function of the distance to the target and the target’s size.

Thus, large targets are good. Be sure to make the entire target area clickable.

You can dynamically change the size of a target when pointing to make it easier to acquire.

A target becomes larger when the user is pointing to make it easier to acquire

And close targets are also good. Locate clickable items close to where they are most likely going to be used. In the following image, the color palette is too far away from the tool selector.

The color palette is too far from where it is likely to be used

Consider the fact that the user’s current pointer location is as close as a target can be, making it trivial to acquire. Thus, context menus take full advantage of Fitts’ law, as do the mini toolbars used by Microsoft Office.

The current pointer location is always the easiest to acquire

Also, consider alternative input devices when determining object sizes. For example, the minimum target size recommended for touch is 23×23 pixels (13×13 DLUs).

Not all Windows environments have a mouse. For example, kiosks rarely have a mouse and usually have a touchscreen instead. This means that users can perform simple interactions such as left-clicking and perhaps dragging-and-dropping. However, they can’t hover, right-click, or double-click. This situation is easy to design for because these limitations are usually known in advance.

Using a mouse requires fine motor skills, and as a result, not all users can use a mouse. To make your software accessible to the broadest audience, make sure all interactions for which fine motor skills aren’t essential can be performed using the keyboard instead.

For more information and guidelines, see Accessibility.

If you do only four things…

The following table summarizes the mouse button interactions that apply in most cases:

Make click targets at least 16×16 pixels so that they can be easily clicked by any input device. For touch, the recommended minimum control size is 23×23 pixels (13×13 DLUs). Consider dynamically changing the size of small targets when the user is pointing to make them easier to acquire.

In this example, the spin control buttons are too small to be used effectively with touch or a pen.

Make splitters at least five pixels wide so that they can be easily clicked by any input device. Consider dynamically changing the size of small targets when the user is pointing to make them easier to acquire.

In this example, the splitter in the Windows Explorer navigation pane is too narrow to be used effectively with a mouse or pen.

Provide users a margin of error spatially. Allow for some mouse movement (for example, three pixels) when users release a mouse button. Users sometimes move the mouse slightly as they release the mouse button, so the mouse position just before button release better reflects the user’s intention than the position just after.

Provide users a margin of error temporally. Use the system double-click speed to distinguish between single and double clicks.
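The temporal margin can be sketched as follows. The 500 ms interval below is a stand-in; on Windows the real value would come from the `GetDoubleClickTime` API, and the function name `classify_clicks` is illustrative:

```python
# Sketch: distinguishing single from double clicks using the system
# double-click interval (500 ms is an assumed stand-in value).

DOUBLE_CLICK_MS = 500

def classify_clicks(click_times_ms):
    """Group a sorted sequence of click timestamps (in ms) into
    'single' and 'double' click events."""
    events, i = [], 0
    while i < len(click_times_ms):
        # Two clicks within the interval form one double click.
        if (i + 1 < len(click_times_ms) and
                click_times_ms[i + 1] - click_times_ms[i] <= DOUBLE_CLICK_MS):
            events.append('double')
            i += 2
        else:
            events.append('single')
            i += 1
    return events
```

Two clicks 200 ms apart are classified as one double click; clicks 800 ms apart remain two single clicks.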

Have clicks take effect on mouse button up. Allow users to abandon mouse actions by removing the mouse from valid targets before releasing the mouse button. For most mouse interactions, pressing a mouse button only indicates the selected target and releasing the button activates the action. Auto-repeat functions (such as pressing a scroll arrow to continuously scroll) are an exception.
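The press-selects, release-activates model, combined with the small spatial margin recommended above, can be sketched like this (all names are illustrative):

```python
# Sketch of "clicks take effect on mouse button up": pressing selects a
# target; releasing over it (or within a small slip margin) activates
# it; releasing elsewhere abandons the action.

MARGIN = 3  # pixels of movement tolerated between press and release

def resolve_click(pressed_target, press_pos, release_pos, target_at):
    """Return the activated target, or None if the user abandoned the
    click by releasing away from the pressed target. `target_at` maps
    a position to whatever target lies under it."""
    dx = abs(release_pos[0] - press_pos[0])
    dy = abs(release_pos[1] - press_pos[1])
    # Small slips on release still count as a click on the pressed target.
    if dx <= MARGIN and dy <= MARGIN:
        return pressed_target
    # Otherwise the click only takes effect if released over the same target.
    return pressed_target if target_at(release_pos) == pressed_target else None
```

Releasing two pixels away still clicks the pressed button; releasing over a different target abandons the action.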

Capture the mouse for selecting, moving, resizing, splitting, and dragging.

Use the Esc key to let users abandon compound mouse interactions such as moving, resizing, splitting, and dragging.

If an object doesn’t support double clicks but users are likely to assume it does, interpret a “double click” as one single click. Assume the user intended a single action instead of two.

Because users are likely to assume that taskbar buttons support double clicks, a “double click” should be handled as a single click.

Ignore redundant mouse clicks while your program is inactive. For example, if the user clicks a button 10 times while a program is inactive, interpret that as a single click.

Don’t use double drags or chords. A double drag is a drag action commenced with a double-click, and a chord is when multiple mouse buttons are pressed simultaneously. These interactions aren’t standard, aren’t discoverable, are difficult to perform, and are most likely performed accidentally.

Don’t use Alt as a modifier for mouse interactions. The Alt key is reserved for toolbar access and access keys.

Don’t use Shift+Ctrl as a modifier for mouse interactions. Doing so would be too difficult to use.

Make hover redundant. To make your program touchable, take full advantage of hover but only in ways that are not required to perform an action. This usually means that an action can also be performed by clicking, but not necessarily in exactly the same way. Hover isn’t supported by most touch technologies, so users with such touchscreens can’t perform any tasks that require hovering.

Internet Explorer supports reader mode, which features the scroll-origin icon

The activity pointers in Windows are the busy pointer and the working-in-background pointer.

Don’t display the caret until the text input window or control has input focus. The caret suggests input focus to users, but a window or control can display the caret without input focus. Of course, don’t steal input focus so that an out-of-context dialog box can display the caret.

The Windows Credential Manager is displayed out of context with the caret but without input focus. As a result, users end up typing their password in unexpected places.

Place the caret where users are most likely to type first. Usually this is either the last place the user was typing or at the end of the text.

For more information and guidelines, see Accessibility.

When referring to the mouse:

When referring to mouse pointers:


The mouse pointer changes shape in Microsoft Excel 2013 and Excel 2010 depending upon the context.

For more information, see mouse-pointers.

The white cross – used for selecting cells.

The I-beam – indicates that you may type text in this area.

 The fill handle – used for copying formula or extending a data series.


The black arrow – used to select a whole row or column when positioned on the row number or column letter.

The horizontal double-headed arrow – appears at the border between the column letters. Drag to widen or narrow a column.

The vertical double-headed arrow – appears at the border between the row numbers. Drag to increase or decrease the height of a row.


This is question number 1144.

Created by Chris Limb on 25 January 2005 and last updated by Alexander Butler on 17 July 2018

Copyright © 2022, University of Sussex

In the following, Matob explains the functions of the mouse in depth, in the hope of giving you a thorough understanding.

A computer mouse is a handheld hardware input device that controls the cursor in a GUI and can move and select text, icons, files, and folders.

For desktop computers, the mouse is placed on a flat surface, such as a mouse pad or the desk, in front of the computer. The image on the right is an example of a desktop computer mouse with two buttons and a wheel.

The mouse was originally known as the X-Y Position Indicator for Display Systems and was created by Douglas Engelbart in 1963 while working at the Stanford Research Institute (SRI). Because the Xerox Alto saw little commercial success, the first mouse in widespread use shipped with the Apple Lisa computer. Today, this pointing device is on virtually every computer.

Below is a list of each computer mouse function that helps users use their computer, giving you an overview of everything a mouse is capable of.

For example, many mice have two side buttons near the thumb; the button closest to the palm of the hand can be programmed to return to a previous web page.

We hope this explanation of the mouse’s functions helps you get to know and explore more about the topics discussed in this article.


In computer user interfaces, a cursor is an indicator used to show the current position for user interaction on a computer monitor or other display device that will respond to input from a text input or pointing device. The mouse cursor is also called a pointer,[1] owing to its resemblance in usage to a pointing stick.

Cursor is Latin for ‘runner’. A cursor is a name given to the transparent slide engraved with a hairline used to mark a point on a slide rule. The term was then transferred to computers through analogy.

On 14 November 1963, while attending a conference on computer graphics in Reno, Nevada, Douglas Engelbart of Augmentation Research Center (ARC) first expressed his thoughts to pursue his objective of developing both hardware and software computer technology to “augment” human intelligence by pondering how to adapt the underlying principles of the planimeter to inputting X- and Y-coordinate data, and envisioned something like the cursor of a mouse he initially called a “bug”, which, in a “3-point” form, could have a “drop point and 2 orthogonal wheels”.[2] He wrote that the “bug” would be “easier” and “more natural” to use, and unlike a stylus, it would stay still when let go, which meant it would be “much better for coordination with the keyboard.”[2]

According to Roger Bates, a young hardware designer at ARC under Bill English, the cursor on the screen was for some unknown reason also referred to as “CAT” at the time, which led to calling the new pointing device a “mouse” as well.[3][4]

In most command-line interfaces or text editors, the text cursor, also known as a caret,[5] is an underscore, a solid rectangle, or a vertical line, which may be flashing or steady, indicating where text will be placed when entered (the insertion point). In text mode displays, it was not possible to show a vertical bar between characters to show where the new text would be inserted, so an underscore or block cursor was used instead. In situations where a block was used, the block was usually created by inverting the pixels of the character using the boolean math exclusive or function.[6] On text editors and word processors of modern design on bitmapped displays, the vertical bar is typically used instead.
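The exclusive-or trick described above can be sketched on a 1-bit character cell; a second XOR with the same mask restores the original glyph, which is what made it attractive for a blinking cursor (the function name is illustrative):

```python
# Sketch of a text-mode block cursor: invert every pixel of the
# character cell with XOR. Applying it twice restores the glyph.

def toggle_block_cursor(cell_pixels):
    """Invert each 1-bit pixel in a character cell (pixel XOR 1)."""
    return [[px ^ 1 for px in row] for row in cell_pixels]
```

Toggling the cursor on and then off again leaves the character image unchanged, so no saved copy of the glyph is needed.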

In a typical text editing application, the cursor can be moved by pressing various keys. These include the four arrow keys, the Page Up and Page Down keys, the Home key, the End key, and various key combinations involving a modifier key such as the Control key. The position of the cursor also may be changed by moving the mouse pointer to a different location in the document and clicking.

The blinking of the text cursor is usually temporarily suspended when it is being moved; otherwise, the cursor may change position when it is not visible, making its location difficult to follow.

The concept of a blinking cursor can be attributed to Charles Kiesling Sr. via US Patent 3531796,[7][8] filed in August 1967.[9]

Some interfaces use an underscore or thin vertical bar to indicate that the user is in insert mode, a mode where text will be inserted in the middle of the existing text, and a larger block to indicate that the user is in overtype mode, where inserted text will overwrite existing text. In this way, a block cursor may be seen as a piece of selected text one character wide, since typing will replace the text “in” the cursor with the new text.
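The two modes can be sketched on a plain-string text buffer (real editors use richer data structures; the function name is illustrative):

```python
# Sketch of insert vs. overtype editing at a cursor position.

def type_char(text, pos, ch, overtype=False):
    """Type `ch` at `pos`: insert mode pushes existing text to the
    right, overtype mode replaces the character under the cursor.
    Returns (new_text, new_cursor_pos)."""
    if overtype and pos < len(text):
        new_text = text[:pos] + ch + text[pos + 1:]   # overwrite in place
    else:
        new_text = text[:pos] + ch + text[pos:]       # insert, shifting text
    return new_text, pos + 1
```

Typing "X" at position 1 of "abc" yields "aXbc" in insert mode but "aXc" in overtype mode; at the end of the text, overtype falls back to inserting.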

A vertical line text cursor with a small left-pointing or right-pointing appendage is for indicating the direction of text flow on systems that support bi-directional text, and is thus usually known among programmers as a ‘bidi cursor’. In some cases, the cursor may split into two parts, each indicating where left-to-right and right-to-left text would be inserted.[10]

In computing, a pointer or mouse cursor (as part of a personal computer WIMP style of interaction)[11][12][13] is a symbol or graphical image on the computer monitor or other display device that echoes movements of the pointing device, commonly a mouse, touchpad, or stylus pen. It signals the point where actions of the user take place. It can be used in text-based or graphical user interfaces to select and move other elements. It is distinct from the cursor, which responds to keyboard input. The cursor may also be repositioned using the pointer.

The pointer commonly appears as an angled arrow (angled because historically that improved appearance on low-resolution screens[14]), but it can vary within different programs or operating systems. The use of a pointer is employed when the input method, or pointing device, is a device that can move fluidly across a screen and select or highlight objects on the screen. In GUIs where the input method relies on hard keys, such as the five-way key on many mobile phones, there is no pointer employed, and instead, the GUI relies on a clear focus state.

The pointer or mouse cursor echoes movements of the pointing device, commonly a mouse, touchpad or trackball.
This kind of cursor is used to manipulate elements of graphical user interfaces such as menus, buttons, scrollbars or any other widget. It may be called a “mouse pointer” because the mouse is the dominant type of pointing device used with desktop computers.

The pointer hotspot is the active pixel of the pointer, used to target a click or drag. The hotspot is normally along the pointer edges or in its center, though it may reside at any location in the pointer.[15][16]

In many GUIs, moving the pointer around the screen may reveal other screen hotspots as the pointer changes shape depending on the circumstances. For example:

The I-beam pointer (also called the I-cursor) is a cursor shaped like a serifed capital letter “I”. The purpose of this cursor is to indicate that the text beneath the cursor can be highlighted and sometimes inserted or changed.[19]

Pointer trails can be used to enhance the pointer’s visibility during movement. They are a feature of GUI operating systems; although disabled by default, pointer trails have been an option in every version of Microsoft Windows since Windows 3.1x.

When pointer trails are active and the mouse or stylus is moved, the system waits a moment before removing the pointer image from the old location on the screen. A copy of the pointer persists at every point that the pointer has visited at that moment, resulting in a snake-like trail of pointer icons that follow the actual pointer. When the user stops moving the mouse or removes the stylus from the screen, the trails disappear and the pointer returns to normal.
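The trail behavior amounts to keeping a short buffer of recent pointer positions, drawing a copy at each, and clearing the buffer when movement stops. A minimal sketch (the class and method names are illustrative):

```python
# Sketch of pointer trails: a bounded buffer of recent positions.
from collections import deque

class PointerTrail:
    def __init__(self, length=5):
        self.points = deque(maxlen=length)  # oldest copies drop off the end

    def move_to(self, pos):
        """Record a new pointer position; return all positions to draw."""
        self.points.append(pos)
        return list(self.points)

    def stop(self):
        """Trails disappear when the user stops moving the pointer."""
        self.points.clear()
```

Each frame the system draws every buffered copy; the bounded deque makes old copies vanish in order, producing the snake-like trail.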

Pointer trails have been provided as a feature mainly for users with poor vision and for screens where low visibility may become an issue, such as LCD screens in bright sunlight.

In Windows, pointer trails may be enabled in the Control Panel, usually under the Mouse applet.

Introduced with Windows NT, an animated pointer was a small looping animation that was played at the location of the pointer.[20] This is used, for example, to provide a visual cue that the computer is busy with a task.[21] After their introduction, many animated pointers became available for download from third party suppliers. Unfortunately, animated pointers are not without their problems. In addition to imposing a small additional load on the CPU, the animated pointer routines did introduce a security vulnerability. A client-side exploit known as the Windows Animated Cursor Remote Code Execution Vulnerability used a buffer overflow vulnerability to load malicious code via the animated cursor load routine of Windows.[22]

A pointer editor is software for creating and editing static or animated mouse pointers. Pointer editors usually support both static and animated mouse cursors, but there are exceptions. An animated cursor is a sequence of static cursors representing individual frames of an animation. A pointer editor should be able to:

Pointer editors are occasionally combined with icon editors because computer icons and cursors share similar properties. Both contain small raster images and the file format used to store icons and static cursors in Microsoft Windows is similar.

Despite the similarities, pointer editors differ from icon editors in several ways. While icons contain multiple images with different sizes and color depths, static cursors (for Windows) only contain a single image. Pointer editors must provide the means to set the hot spot. Animated pointer editors additionally must be able to handle animations.

The idea of a cursor being used as a marker or insertion point for new data or transformations, such as rotation, can be extended to a 3D modeling environment. Blender, for instance, uses a 3D cursor to determine where operations such as placing meshes are to take place in the 3D viewport.[23]


A pointing device is a human interface device that allows a user to input spatial (i.e., continuous and multi-dimensional) data to a computer. CAD systems and graphical user interfaces (GUI) allow the user to control and provide data to the computer using physical gestures by moving a hand-held mouse or similar device across the surface of the physical desktop and activating switches on the mouse. Movements of the pointing device are echoed on the screen by movements of the pointer (or cursor) and other visual changes. Common gestures are point and click and drag and drop.

While the most common pointing device by far is the mouse, many more devices have been developed. However, the term mouse is commonly used as a metaphor for devices that move the cursor.

Fitts’s law can be used to predict the speed with which users can use a pointing device.

To classify several pointing devices, a certain number of features can be considered. For example, the device’s movement, controlling, positioning or resistance. The following points should provide an overview of the different classifications.[1]

In case of a direct-input pointing device, the on-screen pointer is at the same physical position as the pointing device (e.g., finger on a touch screen, stylus on a tablet computer). An indirect-input pointing device is not at the same physical position as the pointer but translates its movement onto the screen (e.g., computer mouse, joystick, stylus on a graphics tablet).

An absolute-movement input device (e.g., stylus, finger on touch screen) provides a consistent mapping between a point in the input space (location/state of the input device) and a point in the output space (position of pointer on screen).
A relative-movement input device (e.g., mouse, joystick) maps displacement in the input space to displacement in the output state. It therefore controls the relative position of the cursor compared to its initial position.
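The two mappings reduce to two different update rules (the function names are illustrative):

```python
# Sketch: absolute devices report a screen position directly; relative
# devices report displacements accumulated onto the current position.

def absolute_update(_pointer, device_point):
    """Finger/stylus: the input position maps straight to the pointer."""
    return device_point

def relative_update(pointer, displacement):
    """Mouse/joystick: the displacement is added to the current position."""
    return (pointer[0] + displacement[0], pointer[1] + displacement[1])
```

With the pointer at (5, 5), an absolute touch at (100, 200) jumps the pointer there, while a relative displacement of (3, -2) moves it to (8, 3).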

An isotonic pointing device is movable and measures its displacement (mouse, pen, human arm) whereas an isometric device is fixed and measures the force which acts on it (trackpoint, force-sensing touch screen).
An elastic device increases its force resistance with displacement (joystick).

A position-control input device (e.g., mouse, finger on touch screen) directly changes the absolute or relative position of the on-screen pointer.
A rate-control input device (e.g., trackpoint, joystick) changes the speed and direction of the movement of the on-screen pointer.

Another classification is the differentiation between whether the device is physically translated or rotated.

Different pointing devices have different degrees of freedom (DOF). A computer mouse has two degrees of freedom, namely its movement along the x- and y-axes. The Wiimote, however, has six degrees of freedom: the x-, y-, and z-axes for movement as well as for rotation.

As mentioned later in this article, pointing devices have different possible states. Examples for these states are out of range, tracking or dragging.


The following table shows a classification of pointing devices by their number of dimensions (columns) and the property sensed (rows), introduced by Bill Buxton. The sub-rows distinguish between devices with a mechanical intermediary such as a stylus (M) and touch-sensitive devices (T); the sub-columns distinguish devices that use comparable motor control for their operation. The taxonomy is rooted in the human motor/sensory system and categorizes continuous manual input devices. The table is based on the original graphic of Bill Buxton’s work on “Taxonomies of Input”.[2]

This model describes different states that a pointing device can assume. The three common states as described by Buxton are out of range, tracking and dragging. Not every pointing device can switch to all states.[3]

Fitts’s law (often cited as Fitts’ law) is a predictive model of human movement primarily used in human–computer interaction and ergonomics. This scientific law predicts that the time required to rapidly move to a target area is a function of the ratio between the distance to the target and the width of the target.[4] Fitts’s law is used to model the act of pointing, either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer monitor using a pointing device.
In other words, a user needs more time to click a small button that is far from the cursor than a large button near it. This makes it generally possible to predict the time needed for a selective movement to a given target.

The common metric for the average time to complete the movement is the Shannon formulation MT = a + b · log2(D/W + 1), where a and b are empirically determined device constants, D is the distance to the target, and W is the target’s width.


This supports the interpretation that, as mentioned before, large, close targets can be reached faster than small, distant targets.
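Using the Shannon formulation of Fitts’s law, MT = a + b · log2(D/W + 1), the prediction can be computed directly; the constants a and b below are arbitrary stand-ins for empirically measured device parameters:

```python
import math

# Fitts's law, Shannon formulation: MT = a + b * log2(D/W + 1).
# a and b are assumed values, not measured device constants.

def movement_time(distance, width, a=0.1, b=0.2):
    """Predicted time (seconds) to acquire a target of `width`
    at `distance` (same units for both)."""
    return a + b * math.log2(distance / width + 1)
```

A large, close target (D = 100, W = 100) gives MT = 0.1 + 0.2 · log2(2) = 0.3 s, while a small, distant one (D = 700, W = 10) predicts a noticeably longer time.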

As mentioned above, the size and distance of an object influence its selection, and this in turn affects the user experience. It is therefore important to consider Fitts’ Law when designing user interfaces. Some basic principles are listed below.[5]

The control-display gain (CD gain) describes the proportion between movements in the control space and movements in the display space. For example, a hardware mouse moves at a different speed or over a different distance than the cursor on the screen. Even though these movements take place in two different spaces, they must be measured in the same units (e.g., meters rather than pixels) for the comparison to be meaningful. The CD gain is the scale factor between the two movements: CDgain = v_pointer / v_device.

The CD gain settings can usually be adjusted, but a compromise has to be found: a high gain makes it easier to approach a distant target, whereas with a low gain this takes longer; at the same time, a high gain hinders precise selection of a target, whereas a low gain facilitates it.[6] Microsoft Windows, macOS, and the X Window System implement mechanisms that adapt the CD gain to the user's needs, e.g. increasing the CD gain as the user's movement velocity increases[7] (historically referred to as “mouse acceleration”).
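A velocity-dependent gain of this kind can be sketched as follows; the linear gain curve and constants here are illustrative, not the actual Windows, macOS, or X transfer function:

```python
def pointer_displacement(device_velocity_mps, dt, base_gain=4.0, accel=0.8):
    """Sketch of a velocity-dependent CD gain ("mouse acceleration"):
    the gain grows with the device's movement speed, so fast motions
    cover more screen distance per unit of hand movement than slow ones.
    The linear gain ramp and constants are made up for illustration."""
    cd_gain = base_gain + accel * device_velocity_mps  # dimensionless ratio
    return cd_gain * device_velocity_mps * dt          # display-space distance

slow = pointer_displacement(0.05, dt=0.01)  # slow, precise motion
fast = pointer_displacement(0.50, dt=0.01)  # quick flick
```

Dividing each displacement by the corresponding hand movement shows the effective CD gain rising with speed, which is the behavior described above.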

A mouse is a small handheld device pushed over a horizontal surface.

A mouse moves the graphical pointer by being slid across a smooth surface. The conventional roller-ball mouse uses a ball to create this action: the ball is in contact with two small shafts that are set at right angles to each other. As the ball moves these shafts rotate, and the rotation is measured by sensors within the mouse. The distance and direction information from the sensors is then transmitted to the computer, and the computer moves the graphical pointer on the screen by following the movements of the mouse. Another common mouse is the optical mouse. This device is very similar to the conventional mouse but uses visible or infrared light instead of a roller-ball to detect the changes in position.[8]
There is also the mini-mouse, a small, egg-sized mouse for use with laptop computers; usually small enough to be used on a free area of the laptop body itself, it is typically optical, includes a retractable cord, and uses a USB port to save battery life.
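The host side of this arrangement can be sketched as folding the mouse's stream of (dx, dy) motion reports into a clamped on-screen position; the counts-per-pixel scaling and acceleration a real driver applies are omitted here:

```python
def accumulate_pointer(position, deltas, screen_w=1920, screen_h=1080):
    """Sketch of how a host folds a stream of mouse motion reports
    (dx, dy counts from the sensors) into an on-screen pointer position,
    clamping at the screen edges. Real drivers also scale the raw
    counts and apply acceleration; both are omitted here."""
    x, y = position
    for dx, dy in deltas:
        x = min(max(x + dx, 0), screen_w - 1)
        y = min(max(y + dy, 0), screen_h - 1)
    return x, y

# Three reports; the last one pushes the pointer against the left edge.
pos = accumulate_pointer((100, 100), [(5, 0), (0, -3), (-200, 0)])
```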

A trackball is a pointing device consisting of a ball housed in a socket containing sensors to detect rotation of the ball about two axes, similar to an upside-down mouse: as the user rolls the ball with a thumb, fingers, or palm, the pointer on the screen moves accordingly. Trackballs are commonly used on CAD workstations for ease of use, where there may be no desk space on which to use a mouse. Some can clip onto the side of the keyboard and have buttons with the same functionality as mouse buttons.[9] There are also wireless trackballs, which offer a wider range of ergonomic positions to the user.

Isotonic joysticks are sticks whose position the user can change freely, applying a more or less constant force.

Isometric joysticks are where the user controls the stick by varying the amount of force they push with, and the position of the stick remains more or less constant. Isometric joysticks are often cited as more difficult to use due to the lack of tactile feedback provided by an actual moving joystick.

A pointing stick is a pressure-sensitive small nub used like a joystick. It is usually found on laptops embedded between the G, H, and B keys. It operates by sensing the force applied by the user. The corresponding “mouse” buttons are commonly placed just below the space bar. It is also found on mice and some desktop keyboards.

The Wii Remote, also known colloquially as the Wiimote, is the primary controller for Nintendo’s Wii console. A main feature of the Wii Remote is its motion sensing capability, which allows the user to interact with and manipulate items on screen via gesture recognition and pointing through the use of accelerometer and optical sensor technology.

A finger-tracking device tracks fingers in 3D space, or close to a surface, without contact with a screen. Fingers are triangulated by technologies such as stereo cameras, time-of-flight sensors, and lasers. Examples of finger-tracking pointing devices include LM3LABS’ Ubiq’window and AirStrike.

A graphics tablet or digitizing tablet is a special tablet similar to a touchpad, but controlled with a pen or stylus that is held and used like a normal pen or pencil. The thumb usually controls the clicking via a two-way button on the top of the pen, or by tapping on the tablet’s surface.

A cursor (also called a puck) is similar to a mouse, except that it has a window with cross hairs for pinpoint placement, and it can have as many as 16 buttons. A pen (also called a stylus) looks like a simple ballpoint pen but uses an electronic head instead of ink. The tablet contains electronics that enable it to detect movement of the cursor or pen and translate the movements into digital signals that it sends to the computer.[10] This differs from a mouse in that each point on the tablet corresponds to a point on the screen.

A stylus is a small pen-shaped instrument that is used to input commands to a computer screen, mobile device or graphics tablet.

The stylus is the primary input device for personal digital assistants, smartphones and some handheld gaming systems such as the Nintendo DS that require accurate input, although devices featuring multi-touch finger-input with capacitive touchscreens have become more popular than stylus-driven devices in the smartphone market.

A touchpad or trackpad is a flat surface that can detect finger contact. It is a stationary pointing device, commonly used on laptop computers. At least one physical button normally comes with the touchpad, but the user can also generate a mouse click by tapping on the pad. Advanced features include pressure sensitivity and special gestures such as scrolling by moving one’s finger along an edge.

It uses a two-layer grid of electrodes to measure finger movement: one layer has vertical electrode strips that handle vertical movement, and the other layer has horizontal electrode strips to handle horizontal movements.[11]
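The position estimate from such an electrode grid can be sketched as a signal-weighted centroid over one layer's strips; the capacitance values below are made up, and a real touchpad controller would add filtering and calibration:

```python
def finger_position(strip_signals):
    """Estimate a finger's coordinate along one electrode layer as the
    signal-weighted centroid of the strip readings. A controller would
    run this on both the vertical and horizontal layers to obtain
    (x, y). Input values are illustrative capacitance deltas."""
    total = sum(strip_signals)
    if total == 0:
        return None  # no touch detected
    return sum(i * s for i, s in enumerate(strip_signals)) / total

# Finger centered between strips 2 and 3:
x = finger_position([0, 1, 8, 8, 1, 0])
```

The centroid gives sub-strip resolution, which is why a touchpad can report positions finer than its electrode pitch.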

A touchscreen is an input device embedded into a display, such as a TV monitor or the LCD screen of a laptop computer. Users interact with the device by physically pressing items shown on the screen, either with their fingers or with a helping tool.

Several technologies can be used to detect touch. Resistive and capacitive touchscreens have conductive materials embedded in the glass and detect the position of the touch by measuring changes in electric current. Infrared controllers project a grid of infrared beams across the frame surrounding the screen and detect where an object interrupts the beams.
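The resistive approach in particular admits a simple read-out model: with a voltage gradient driven across one conductive layer, the touched point taps a voltage proportional to its position, so the ADC reading maps linearly to a screen coordinate. A minimal sketch, with all constants illustrative rather than taken from any real driver:

```python
def resistive_touch_x(v_measured, v_ref=3.3, screen_width_px=1024):
    """Sketch of reading the X coordinate on a 4-wire resistive
    touchscreen: the measured voltage is a fraction of the reference
    voltage proportional to the touch position along the X layer.
    Real drivers also debounce and calibrate; both are omitted."""
    return round(v_measured / v_ref * (screen_width_px - 1))

col = resistive_touch_x(1.65)  # mid-scale reading, middle of the screen
```

The same measurement is repeated with the gradient driven across the other layer to obtain the Y coordinate.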

Modern touchscreens can be used in conjunction with stylus pointing devices, while infrared-based ones do not require physical touch at all, instead recognizing the movement of hands and fingers within some minimum distance of the actual screen.

Touchscreens became popular with the introduction of palmtop computers such as those sold by Palm, Inc., some high-end classes of laptop computers, mobile smartphones such as HTC’s models and the Apple iPhone, and the availability of standard touchscreen device drivers in the Symbian, Palm OS, Mac OS X, and Microsoft Windows operating systems.

In contrast to a 3D joystick, the stick itself does not move, or moves only very little, and is mounted in the device chassis. To move the pointer, the user applies force to the stick. Typical examples are found on notebook keyboards, between the “G” and “H” keys. Applying pressure to the TrackPoint moves the cursor on the display.[12]

