In the modern age, we are surrounded by technology that has become so integrated into our daily lives that it often fades into the background. However, every so often, we stop and look closely at a device, a piece of infrastructure, or a digital phenomenon and ask: “What are those things?” Whether it is the strange black circle on the back of an iPhone, the odd-looking cylinders on power lines, or the invisible data packets that allow a website to load in milliseconds, our world is built upon layers of sophisticated engineering that remain largely mysterious to the average user.

This article pulls back the curtain on the “hidden” side of technology. By exploring the physical sensors, digital protocols, and artificial intelligence frameworks that drive our contemporary existence, we can better understand the complex ecosystem that supports our digital habits.
The Physical Layer: Sensors and Hardware You See Every Day
Much of the technology we interact with is hiding in plain sight. From the smartphones in our pockets to the autonomous vehicles on our streets, specific hardware components perform critical tasks that most of us take for granted.
LiDAR and the Rise of Spatial Awareness
If you have looked at the camera array of a high-end smartphone or noticed a spinning cylinder on top of a self-driving car, you have seen a LiDAR (Light Detection and Ranging) sensor. But what is it? Unlike traditional cameras that capture light to create a 2D image, LiDAR emits rapid pulses of laser light to measure distances. By calculating how long it takes for the light to bounce off an object and return to the sensor, the device creates a “point cloud”—a high-resolution 3D map of the surrounding environment. This technology is what allows your phone to measure a person’s height instantly or an autonomous vehicle to navigate a busy intersection safely in total darkness.
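The time-of-flight arithmetic behind LiDAR ranging is simple enough to sketch. This is an illustrative calculation, not any vendor's firmware: the sensor times a laser pulse's round trip, and halving that travel distance gives the range to the object.

```python
# Time-of-flight distance estimate, the principle behind LiDAR ranging.
# The sensor measures the round-trip time of a laser pulse; dividing the
# travel distance by two gives the one-way range. Values are illustrative.

SPEED_OF_LIGHT = 299_792_458  # meters per second

def distance_from_echo(round_trip_seconds: float) -> float:
    """Convert a pulse's round-trip time into a one-way distance (meters)."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse that returns after ~66.7 nanoseconds hit something about 10 m away.
print(round(distance_from_echo(66.7e-9), 2))
```

Repeating this calculation millions of times per second, across pulses swept in every direction, is what turns individual echoes into a point cloud.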
MEMS: The Micro-Machines in Your Pocket
When you tilt your phone to play a racing game or watch a video in landscape mode, the screen rotates automatically. This is made possible by MEMS (Micro-Electro-Mechanical Systems). These are microscopic machines—some no larger than a grain of sand—etched into silicon chips. Inside your phone, MEMS accelerometers and gyroscopes detect movement and orientation. These “things” are essentially tiny physical structures that bend or vibrate in response to motion, converting physical force into electrical signals. Without them, modern mobile interfaces and drone stabilization would be impossible.
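The screen-rotation decision can be sketched in a few lines. The axis conventions and thresholds below are invented for illustration (real operating systems use filtered readings and hysteresis), but the core idea holds: gravity shows up as acceleration along the accelerometer's axes, and the dominant axis reveals the tilt.

```python
# Sketch: deciding screen orientation from MEMS accelerometer readings.
# Gravity appears as acceleration along the x/y axes; whichever axis
# dominates tells the OS which way the phone is held. Axis conventions
# here are illustrative, not a specific vendor's.

def orientation(ax: float, ay: float) -> str:
    """Pick an orientation from the dominant gravity axis (m/s^2)."""
    if abs(ay) >= abs(ax):
        return "portrait" if ay < 0 else "portrait-upside-down"
    return "landscape-left" if ax < 0 else "landscape-right"

print(orientation(0.1, -9.7))   # gravity mostly along -y: portrait
print(orientation(-9.8, 0.2))   # gravity mostly along -x: landscape-left
```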
Haptic Engines: The Illusion of Touch
Have you ever clicked a trackpad on a modern laptop and felt a satisfying “thump,” only to realize later that the trackpad didn’t actually move? Or felt a nuanced vibration when you scrolled through a digital wheel on your watch? Those sensations are created by haptic engines (specifically Linear Resonant Actuators). These are specialized motors designed to move with extreme precision and speed. They create “tactile feedback,” tricking your brain into thinking you are interacting with a physical button when you are actually touching a solid piece of glass or metal.
The Invisible Infrastructure: Digital Protocols and Hidden Software
While the hardware is often visible, the most complex “things” in our tech world are the invisible digital structures that facilitate global communication. These are the software layers and protocols that ensure your data gets from point A to point B securely and efficiently.
Edge Computing: Bringing the Cloud Closer
We often talk about “the Cloud,” but the Cloud is not a nebulous entity in the sky; it is a collection of massive data centers. However, as we demand faster response times for things like online gaming and video streaming, “Edge Computing” has emerged. You might notice small, unassuming gray boxes mounted on 5G poles or tucked into street corners. These are Edge servers. Instead of sending your data request to a server thousands of miles away, Edge computing processes data at the “edge” of the network, closer to the user. This reduces latency—the delay between an action and a response—making the internet feel instantaneous.
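Physics alone explains why proximity matters. As a back-of-the-envelope sketch (the distances and fiber speed below are illustrative), light in optical fiber travels at roughly two-thirds its vacuum speed, so distance sets a hard floor on round-trip time:

```python
# Back-of-the-envelope latency floor: signals in optical fiber travel at
# roughly 2e8 m/s, so distance alone bounds the round-trip time, before
# any routing or processing delay. Distances are illustrative.

FIBER_SPEED = 2e8  # m/s, approximate signal speed in optical fiber

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time in milliseconds over the given distance."""
    return 2 * distance_km * 1000 / FIBER_SPEED * 1000

print(round(round_trip_ms(4000), 1))  # distant data center, ~4000 km away
print(round(round_trip_ms(10), 2))    # edge server, ~10 km away
```

Moving the server from a continent away to a nearby street corner cuts the physical floor from tens of milliseconds to a fraction of one, which is why Edge deployments matter for gaming and live video.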
Tracking Pixels and Web Beacons: The Eyes of the Internet
When you browse a website and later see an ad for the exact product you were looking at on a different platform, you have encountered a tracking pixel. These “things” are usually invisible 1×1-pixel images embedded in emails or websites. When your browser loads the image, it sends a request back to a server, revealing your IP address, your device type, and the fact that you visited that specific page. While invisible to the naked eye, these beacons are the backbone of the modern digital advertising economy, allowing brands to follow your journey across the web.
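The mechanism is simple enough to sketch. The hostname and parameter names below are hypothetical (real ad networks use their own), but the pattern is the same: the "image" URL carries identifying data as query parameters, and the server logs each fetch.

```python
# Sketch of how a tracking pixel URL is assembled. The hostname and
# parameter names ("cid", "page") are hypothetical; real ad networks
# define their own.

from urllib.parse import urlencode

def pixel_url(campaign_id: str, page: str) -> str:
    """Build the URL a tracking 'image' would be fetched from."""
    params = urlencode({"cid": campaign_id, "page": page})
    return f"https://tracker.example.com/pixel.gif?{params}"

# Embedded as <img src="..." width="1" height="1">, this URL is fetched
# automatically when the page loads, logging the visit server-side.
print(pixel_url("spring-sale", "/shoes/blue-sneakers"))
```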
API Gateways: The Silent Connectors
Have you ever wondered how a travel website can pull prices from dozens of different airlines simultaneously? They use APIs (Application Programming Interfaces). An API is a set of rules that allows one piece of software to talk to another, and an API gateway is the front door that routes, secures, and rate-limits those requests. When you use a “Log in with Google” button on a third-party app, you are witnessing an API in action. It is the invisible handshake that allows different digital ecosystems to share data securely without you ever seeing the underlying code.
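The travel-site pattern is worth sketching. The two “airline APIs” below are stand-in functions returning JSON-like dictionaries, not real services, but the aggregation logic mirrors what happens behind the search box:

```python
# Sketch of the API aggregation pattern a travel site uses: query each
# provider, merge the results, surface the best fare. These "airlines"
# are stand-in functions; real sites make HTTP calls to external APIs.

def airline_a_fares(route: str) -> list[dict]:
    return [{"airline": "A", "route": route, "price": 420}]

def airline_b_fares(route: str) -> list[dict]:
    return [{"airline": "B", "route": route, "price": 389}]

def cheapest(route: str) -> dict:
    """Query every provider and return the lowest fare found."""
    fares = airline_a_fares(route) + airline_b_fares(route)
    return min(fares, key=lambda fare: fare["price"])

print(cheapest("JFK-LHR"))  # the 389 fare from airline B wins
```

Because each provider exposes the same kind of structured response, the site never needs to know how any airline computes its prices internally.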

The AI Black Box: Deciphering Algorithmic “Things”
Artificial Intelligence is frequently discussed, but the specific components that make AI work—the “things” inside the model—are often misunderstood. Understanding these concepts is key to navigating a world where AI is becoming the primary interface for information.
Tokenization: How Machines Read
When you type a prompt into a tool like ChatGPT, the AI doesn’t see “words” in the way humans do. It sees “tokens.” Tokenization is the process of breaking down text into smaller chunks—sometimes whole words, sometimes subword fragments or single characters. These tokens are then converted into numerical vectors. This transformation is what allows the AI to calculate the mathematical probability of which token should come next. When people ask “What are the building blocks of AI?” the answer is tokens.
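A toy tokenizer makes the idea concrete. Real tokenizers (byte-pair encoding, for example) learn vocabularies of tens of thousands of pieces from data; the five-entry vocabulary below is purely illustrative.

```python
# Toy tokenizer: split text into subword chunks and map each to an ID.
# Real models learn their vocabularies from data (e.g. via byte-pair
# encoding); this tiny fixed vocabulary is purely illustrative.

VOCAB = {"un": 0, "break": 1, "able": 2, "the": 3, "cat": 4}

def tokenize(word: str) -> list[int]:
    """Greedy longest-match tokenization against the toy vocabulary."""
    ids, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            if word[i:j] in VOCAB:
                ids.append(VOCAB[word[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token for {word[i:]!r}")
    return ids

print(tokenize("unbreakable"))  # [0, 1, 2]: "un" + "break" + "able"
```

Note that “unbreakable” never appears in the vocabulary, yet the model can still represent it, which is exactly why subword tokenization handles rare and novel words gracefully.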
Computer Vision and Feature Extraction
How does a smart security camera know the difference between a swaying tree branch and a human intruder? It uses a process called feature extraction within a computer vision model. The “things” the AI looks for are specific patterns: the vertical line of a human torso, the circular shape of a head, or the specific gait of a person walking. By layering these features, the AI can categorize visual data in real time. This technology is the foundation of facial recognition, medical imaging analysis, and even the “filters” used on social media apps.
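The lowest-level version of this is edge detection by convolution. As a minimal sketch (the tiny 4×4 “image” and Prewitt-style kernel are illustrative, far simpler than a learned network’s filters), sliding a vertical-edge kernel over pixel values produces a strong response only where brightness changes:

```python
# Sketch of low-level feature extraction: convolving an image with a
# vertical-edge kernel, the kind of pattern early vision layers detect.
# The 4x4 "image" and the Prewitt-style kernel are illustrative.

KERNEL = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]  # responds to dark-to-bright vertical transitions

def convolve(image, kernel):
    """Slide the kernel over the image, summing products at each spot."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(image) - kh + 1):
        row = []
        for c in range(len(image[0]) - kw + 1):
            row.append(sum(kernel[i][j] * image[r + i][c + j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

image = [[0, 0, 0, 9]] * 4  # dark region with a bright right edge
print(convolve(image, KERNEL))  # zero in the flat area, large at the edge
```

A deep network stacks hundreds of learned kernels like this one, so that edges combine into shapes, and shapes into torsos, heads, and gaits.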
Neural Networks: The Layers of Logic
At the heart of modern AI are neural networks—computational models inspired by the human brain. These are composed of “layers” of nodes. As data passes through these layers, the network assigns “weights” to different pieces of information, gradually narrowing down the correct output. While you can’t see a neural network, its architecture determines how “smart” an AI tool feels. The move toward “Deep Learning” simply means adding more and more layers to these networks, allowing them to handle increasingly complex tasks like language translation and creative writing.
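A forward pass through a tiny network shows what “layers” and “weights” actually do. The weights below are hand-picked for illustration; in a real network they are learned from data, and the layers number in the dozens or hundreds.

```python
# Minimal forward pass through a two-layer network: each layer takes a
# weighted sum of its inputs, then a nonlinearity (ReLU) is applied.
# These hand-picked weights are illustrative; real networks learn them.

def relu(xs):
    """Zero out negative values, the standard nonlinearity."""
    return [max(0.0, x) for x in xs]

def layer(inputs, weights):
    """One dense layer: a weighted sum of the inputs per output node."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

W1 = [[0.5, -0.2], [0.1, 0.9]]   # layer 1: 2 inputs -> 2 hidden nodes
W2 = [[1.0, -1.0]]               # layer 2: 2 hidden -> 1 output

def forward(x):
    return layer(relu(layer(x, W1)), W2)

print(forward([1.0, 2.0]))
```

“Deep Learning” just means stacking many more such layers, so the network can build abstractions on top of abstractions instead of a single weighted sum.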
The Future of “Things”: Connectivity and Integration
As we look toward the next decade, the “things” we encounter will become even more integrated into our environment, driven by new standards and emerging technologies that aim to make the digital world feel even more natural.
The Matter Standard and the Unified Smart Home
If you have ever tried to set up a smart home, you’ve likely been frustrated by devices that won’t talk to each other—a Philips light bulb that won’t work with a Google hub, for example. Enter “Matter.” You might start seeing a new triangular logo on gadget packaging. This is a universal connectivity standard that allows devices from different manufacturers to work together seamlessly. It is the industry’s attempt to solve the “fragmentation” problem, ensuring that the “things” you buy today will still work with the ecosystem you build tomorrow.
Digital Twins: The Virtual Shadows
In industrial tech and urban planning, a new “thing” is emerging: the Digital Twin. This is a real-time virtual representation of a physical object or system—like a jet engine or an entire city’s power grid. Using sensors and IoT (Internet of Things) data, the digital twin mimics the behavior of its physical counterpart. This allows engineers to predict when a part will fail before it actually breaks, or to simulate how a new skyscraper will affect wind patterns in a city center.
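In miniature, a digital twin is just a virtual object kept in sync with sensor data, plus logic that reasons about it. The jet-engine scenario, the vibration threshold, and the readings below are all invented for illustration:

```python
# Sketch of a digital twin: a virtual object that mirrors streamed sensor
# readings from its physical counterpart and flags likely failure early.
# The threshold and the readings are invented for illustration.

class EngineTwin:
    VIBRATION_LIMIT = 7.0  # mm/s, hypothetical maintenance threshold

    def __init__(self):
        self.history: list[float] = []

    def ingest(self, vibration_mm_s: float) -> None:
        """Mirror a new reading streamed from the physical engine."""
        self.history.append(vibration_mm_s)

    def needs_maintenance(self) -> bool:
        """Flag the engine when the recent average exceeds the limit."""
        recent = self.history[-3:]
        return bool(recent) and sum(recent) / len(recent) > self.VIBRATION_LIMIT

twin = EngineTwin()
for reading in [3.1, 4.0, 6.8, 7.5, 8.2]:
    twin.ingest(reading)
print(twin.needs_maintenance())  # recent average 7.5 exceeds 7.0: True
```

Industrial twins apply the same loop at vastly larger scale, mirroring thousands of sensors and running physics simulations against the mirrored state.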
Augmented Reality (AR) Overlays
We are moving toward a world where the question “What are those things?” might be answered by the objects themselves. Through AR glasses or smartphone displays, digital information will be overlaid onto the physical world. In the near future, you might point your device at a piece of machinery or a complex intersection, and digital “labels” will appear, explaining exactly what you are looking at.

Conclusion: Knowledge as the Ultimate Interface
The world of technology is often intimidating because so much of it is hidden behind sleek glass screens and complex jargon. However, when we take the time to identify “those things”—the LiDAR sensors, the tracking pixels, the tokens, and the MEMS—the world becomes a little more legible and a lot more fascinating.
Understanding the “how” and the “why” behind these technologies empowers us as consumers and citizens. It allows us to make informed decisions about our privacy, to troubleshoot the gadgets we rely on, and to appreciate the incredible engineering feats that allow us to carry the sum of human knowledge in our pockets. As technology continues to evolve, the physical and digital “things” around us will only become more sophisticated, but the curiosity to ask “what are they?” remains our best tool for navigating the future.