
Author Archives: generaldigital

  1. How Independent Verification & Validation Uncovers the Unknown


When we think about air travel today, we often associate it with flight delays, long security lines, and uncomfortable seats. However, in the early days of aviation, flying was considered far more luxurious.

Perhaps the first plane to provide this type of experience was the de Havilland Comet. The Comet debuted with British Overseas Airways Corporation (BOAC) in 1952 and came equipped with spacious reclining seats, a galley to serve hot food, table seating for groups, large windows—even a bar. Most importantly, the Comet was the first commercial plane to utilize jet engines, providing a smoother, quieter alternative to the propeller engines typical of the time.

    A New Era of Aviation

The interior of the de Havilland Comet

The design of the Comet was one of the most heavily scrutinized processes in the history of aviation, taking six years from the initial proposal before a prototype was designed and built. To ensure the commercial success of the airliner, BOAC specified that the plane required transatlantic range, room for 36 passengers, and a cruising altitude of 40,000 feet. Because the plane utilized new engine technology, both the engines and the airframe had to be designed from scratch; after experiments with a tailless design failed, the plane eventually took on a shape more similar to the jets one might see today. And when conventional methods proved unable to adequately stress test the fuselage, de Havilland invested in a water tank to simulate the stress of 40,000 hours of flight time.

When the Comet took its first commercial flight in 1952, every conceivable measure had been taken to ensure that the plane would safely carry passengers for decades. Despite a few incidents early on, it seemed like de Havilland had found a great commercial success. The use of jet engines cut travel times in half, and other airlines were eager to add the new plane to their fleets.

    Tragedy Strikes

Memorial for the victims of BOAC Flight 781

That success came to a crashing halt on January 10, 1954, when BOAC Flight 781 exploded in mid-air, killing all 35 passengers and crew on board. Immediately following the crash, de Havilland pulled their fleet of Comets until a cause could be found. However, as the investigation was ongoing, the company was eager to get their new jet back in the air. After ruling out a bomb, early signs pointed to an engine as a potential source of the explosion, and after retrofitting the planes, the Comet was flying again—just ten weeks after the loss of Flight 781. Barely two weeks after the Comet was reinstated, South African Airways Flight 201 suffered a similar explosion off the coast of Italy, killing all 21 passengers and crew. It was clear that the plane was unsafe to fly until a definitive cause was found.

Engineers at de Havilland and BOAC were baffled—they had ruled out an explosive device as the cause, and could now rule out engine failure as well. After the initial crash, metal fatigue had been waved off as a potential cause. During the design and testing phase, de Havilland had gone above and beyond to simulate the effect of constant takeoff and landing cycles on the plane, and it had well exceeded the safety standards of the time.

The issue, however, was that traditional propeller-based aircraft (with the exception of the Boeing 307) flew at a cruising altitude of about 10,000 feet or lower to ensure passengers had adequate oxygen to breathe. Meanwhile, the jet-powered Comet flew at 40,000 feet and featured a pressurized cabin, subjecting the airframe to much harsher conditions than engineers had accounted for at the time. Moreover, the large square windows installed for passenger comfort actually concentrated stress on the frame, with cracks forming at the window corners and spreading through the fuselage before eventually causing catastrophic failure.

    The Unknown Unknown

This example is one of the most well-known manifestations of an “unknown unknown,” or something we didn’t know that we didn’t know. Every possible measure was taken to ensure the plane’s safety, but factors not known at the time caused the plane to fail anyway.

Regrettably, in safety-critical applications, trial and error has traditionally been the way to enhance safety. We push the boundaries of science (and regulatory compliance) to their limits and release a product that we think is safe—until tragedy teaches us it is not.

    But this doesn’t have to be the case.

Through a process called Independent Verification & Validation (IV&V), we can uncover possibilities that may not have been considered during the product development lifecycle. And while the FAA and FDA require IV&V for all safety-critical applications, just because something is certified does not mean it is safe.

When performed properly, IV&V can help uncover gaps you didn’t even know existed. A thorough requirements analysis, along with unit and system testing, can reduce risk by discovering errors before they occur in the field.
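To make that idea concrete, here is a minimal, hypothetical Python sketch of the kind of boundary-condition unit test that flushes out an unstated assumption. The function, its simplified linear pressure model, and its limits are all invented for illustration (the 40,000-foot figure simply echoes the Comet story above); this is not drawn from any real avionics code.

```python
import unittest

# Hypothetical helper, for illustration only: cabin pressure differential
# (psi) for a given cruising altitude, using a crude linear model.
def pressure_differential_psi(altitude_ft: float) -> float:
    if altitude_ft < 0:
        raise ValueError("altitude cannot be negative")
    if altitude_ft > 40_000:
        # The model is only valid up to the assumed certified ceiling.
        raise ValueError("altitude exceeds certified ceiling")
    return altitude_ft * 8.3 / 40_000  # ~8.3 psi differential at 40,000 ft

class TestPressureDifferential(unittest.TestCase):
    def test_sea_level(self):
        self.assertEqual(pressure_differential_psi(0), 0.0)

    def test_ceiling(self):
        self.assertAlmostEqual(pressure_differential_psi(40_000), 8.3)

    def test_beyond_ceiling_rejected(self):
        # An "unknown unknown" becomes a known limit once a test encodes it.
        with self.assertRaises(ValueError):
            pressure_differential_psi(45_000)

if __name__ == "__main__":
    unittest.main()
```

The point isn’t the toy model; it’s that writing the test forces the question “what happens past the limit?” to be asked on paper instead of in the field.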

    Of course, there’s no guarantee that a thorough testing process will catch every possible flaw. There will always be unknown unknowns. However, taking the process seriously can help identify things you wish you had known before they become a concern.


  2. Why ChatGPT Isn’t Replacing Humans Anytime Soon


    If you read our last software blog, Challenges of Independent Verification & Validation of Intelligent Systems, you might have noticed that it seemed a little more…robotic than usual.

    That’s because we let ChatGPT write it for us.

    What is ChatGPT?

    ChatGPT is a new AI tool that allows users to basically talk to a robot. But it’s much more than just a simple word generator—ChatGPT has the ability to write songs and essays, answer test questions, write and debug computer programs, and even play tic-tac-toe. And it’s taken the world by storm, amassing over one million users in the first few days it was made public.

This type of technology is not new (ChatGPT, for example, is a refined successor to an earlier OpenAI model called InstructGPT), but it is regarded as the most powerful AI language model ever released to the public. And it’s raising concerns from academia to business about the tool’s ability to plagiarize essays, create phishing emails, and replace workers in positions like sales, marketing, and customer service.

    Are those concerns valid?

ChatGPT’s immense knowledge base makes it an alluring tool for tech-savvy students looking to have their homework done for them. To an untrained eye, the tool creates high-quality work—good enough to pass the bar, medical school exams, and even the final for Wharton’s MBA program. ChatGPT’s ability to not just create content, but also tailor that content and make iterations when given specific instructions, makes it hard to separate human-written from AI-generated text.

Our initial conversation with ChatGPT
    For example, in creating our blog last month, we first asked ChatGPT, “Write a blog about Independent Verification and Validation of safety-critical artificial intelligence and machine learning software systems.” The content, while thorough and informative, was not incredibly interesting, as it was just a canned description of what IV&V is. So then we asked, “Rewrite it focusing on the challenges involved with the verification and validation of intelligent systems,” which produced a more engaging piece. Then we said “Now integrate those two into a complete post,” and voilà! Blog done.
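For the curious, the same iterative-prompting loop we ran in the chat window can be sketched against OpenAI’s Python client. Treat this as a hedged illustration: the model name is an assumption, and the API surface may have changed by the time you read this.

```python
from openai import OpenAI  # assumes the `openai` package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The three prompts from our experiment, applied in sequence so each
# follow-up refines the previous draft.
prompts = [
    "Write a blog about Independent Verification and Validation of "
    "safety-critical artificial intelligence and machine learning "
    "software systems.",
    "Rewrite it focusing on the challenges involved with the verification "
    "and validation of intelligent systems",
    "Now integrate those two into a complete post",
]

messages = []
for prompt in prompts:
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name; substitute a current one
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})

print(reply)  # the integrated post from the final iteration
```

The key design point is that the full conversation history is resent on every call; the model has no memory of its own, so the “iteration” lives entirely in that growing message list.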

Was it a great blog? Not really. If you read it not knowing it was a computer, you’d probably think it was just an average writer producing pretty good content. But on further inspection, you can see the chinks in the armor. Despite being a blog, the text is more polished and direct than colloquial and human. Removing the “Independent” from our second prompt created misalignment, with the tool unsure whether to refer to it as “V&V” or “IV&V.” The pictures and headers we still had to create ourselves. With refinement, we probably could have tweaked the blog further.

    So, are humans safe from the robots?

ChatGPT is so popular that it is often inaccessible due to server constraints
The thing about an AI language model like ChatGPT is that at the end of the day, you still need a person there to prompt the tool and steer whatever you’re using it to build in an optimal direction. That’s why ChatGPT is a tool, not a replacement, for most functions. Concerns about plagiarism have spurred developers to create their own tools to scan work for AI-generated content, and creator OpenAI’s intention to eventually charge for ChatGPT will likely quell the influx of college kids having robots do their homework.

Even if it were capable of truly simulating a human, you would still need a human to guide the tool. That means it won’t ever really replace workers; it will only empower them to perform even more efficiently by leveraging new technology. Just like we always have.


  3. How Does a Display Actually Work?


    Most of us spend more time than we should staring at a display. It’s a big part of our jobs, and our daily lives outside of work (you’re probably reading this on a display right now!). For those in safety- and mission-critical applications, displays even play life-saving roles. But did you ever wonder, how do displays actually work? Today we’ll take a look at what’s happening behind the screens.

    How Does a Display Actually Work?

In simplest terms, a display takes information, or digitized data, from a source and puts it on a screen. The data can travel via various interfaces such as HDMI, USB-C, DVI, and even wireless connections such as Bluetooth®.

However the signal is communicated, the monitor essentially “sees” what the image looks like and puts it up on the screen. But a lot happens in between. In essence, the monitor must convert the incoming signal to suit the screen. Within milliseconds, it “looks” at the image in electronic format and transforms its attributes. Several are taken into consideration, starting with the video format: HD, 4K, 8K, etc. With each step up, image fidelity increases. Other key factors include color, size, and speed. All of these must be processed before the monitor can actually put the image on the screen. Keep in mind, not all images are built equal.

When an image is converted into frames, each frame is a snapshot. Just imagine frame-by-frame animation: it’s the same concept. Several components work very quickly to create these images, as follows:

Decoder – decodes the input signal into the display’s native format

Scaler – resizes the image to fit the screen’s native resolution

Renderer – transforms the image’s native color format into one the display can understand

Ditherer – approximates colors and adds apparent color depth by diffusing pixels of the available colors (see the sketch below)
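To make the ditherer’s job concrete, here is a minimal Python sketch of error-diffusion dithering (Floyd–Steinberg) on a single grayscale channel. A display’s silicon does this in hardware across all three color channels; this is purely an illustration of the principle.

```python
import numpy as np

def floyd_steinberg_1bit(gray: np.ndarray) -> np.ndarray:
    """Quantize an 8-bit grayscale image to pure black/white, diffusing
    each pixel's quantization error onto its unvisited neighbors."""
    img = gray.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0  # nearest available level
            img[y, x] = new
            err = old - new
            # Classic Floyd-Steinberg weights: 7/16, 3/16, 5/16, 1/16.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return np.clip(img, 0, 255).astype(np.uint8)

# A smooth gradient becomes a black/white pattern whose *density* still
# reads as a gradient from a distance.
gradient = np.tile(np.linspace(0, 255, 32), (8, 1))
print(floyd_steinberg_1bit(gradient) // 255)  # 0s and 1s for readability
```

From a distance, the density of black and white pixels reads as the original shades of gray, which is exactly how a display fakes colors its panel cannot natively produce.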

    Let’s Zoom in Closer

Ultimately, the image is made up of tiny pixels. A 1920×1080 (HD) screen has about 2 million of them! Each pixel is composed of three colors: red, green, and blue (RGB). Each color within the pixel has a brightness value between 0 and 255, which sets how intense that color will be. Together, the three values determine the color of the individual pixel. For instance, if the red value is 255, green is 0, and blue is 255, the pixel would be a mix of red and blue, better known as purple.
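If you want to check the arithmetic yourself, a few lines of Python will do it:

```python
# Each pixel is three 8-bit channel intensities: (red, green, blue).
purple = (255, 0, 255)   # full red + full blue, no green

print(1920 * 1080)       # 2,073,600 pixels: "about 2 million"
print(256 ** 3)          # 16,777,216 colors: "over 16 million"
```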


The individually colored pixels then create the composite image, similar to mosaic artwork. (There are over 16 million RGB combinations, each representing a unique color.) A backlight is used to illuminate the image. Recent technology allows for individually controlled backlight regions, which deliver higher contrast and improved overall picture quality, particularly in low ambient light applications.

    LED vs OLED

Technically speaking, LED displays are actually LCD displays with LED backlights. LCDs cannot produce their own light; this is where the LED backlight comes in. The light emitted from the LEDs passes through several filters to create color combinations and ultimately render the image. With OLEDs, the LEDs themselves emit the light and act as the color filter. As a result, an OLED can produce richer and more accurate black levels. Further, the display’s overall profile is thinner, since OLEDs don’t require layers of filters. While LED displays are bulkier, they produce better white light and are therefore much brighter.

    The Tech is Evolving


Display technology continues to evolve rapidly. But keep in mind that a display relies on other hardware and software to function, such as the graphics card and its processor. In order to get the best out of a particular display, all of the components in the system must be up to snuff. So, if you are sourcing cutting-edge display technology, such as adaptive backlighting, be careful. Ask lots of questions to be certain the entire system is designed to give the desired output, both figuratively and literally.

  4. Challenges of Independent Verification & Validation of Intelligent Systems


    (This piece was written by ChatGPT. Learn more about this experiment here)

    Independent Verification and Validation (IV&V) is an important process for ensuring the safety and reliability of artificial intelligence (AI) and machine learning (ML) software systems that are used in safety-critical applications. These applications include self-driving cars, medical devices, military systems, and aviation systems, among others.

    What are the challenges?


    However, the verification and validation (V&V) of intelligent systems presents a number of challenges that must be carefully considered and addressed. One challenge is the complexity of these systems, which can be difficult to fully understand and test. Intelligent systems often involve complex algorithms and data sets that can be difficult to analyze and understand. This can make it challenging to design appropriate test cases and evaluate the system’s behavior under different conditions.

    Another challenge is the potential for these systems to exhibit unpredictable or unexpected behavior. Intelligent systems may learn and adapt over time, which can lead to changes in their behavior that are difficult to predict. This can make it difficult to ensure that the system is operating safely and reliably, particularly in safety-critical applications such as self-driving cars or medical devices.

    In addition, there is a risk that these systems may be biased, either due to the data they are trained on or the algorithms used to process that data. This can lead to unfair or biased outcomes, which can have significant consequences in applications such as hiring or lending decisions. Ensuring that intelligent systems are unbiased and fair is therefore a critical aspect of V&V.

    Finally, the rapid pace of development and innovation in the field of AI and ML can make it difficult to keep up with the latest techniques and approaches. This can make it challenging for V&V teams to stay up-to-date and ensure that they are using the most appropriate methods and tools for the task at hand.

    How do we address them?


    To address these challenges, IV&V involves a series of activities, including testing, inspections, and reviews, that are performed by a team of experts who are independent of the development team. Testing involves executing the software system under various conditions to ensure that it behaves as expected. Inspections involve reviewing the design and code of the software system to identify potential issues or weaknesses. Reviews involve reviewing the software development process itself to ensure that it follows best practices and meets industry standards.

    Overall, the goal of IV&V is to provide an independent assessment of the safety and reliability of AI and ML software systems, which is critical for ensuring the safe and reliable operation of these systems in safety-critical applications. By performing testing, inspections, and reviews, IV&V helps to ensure that the software system meets the required specifications and operates safely and reliably, reducing the risk of accidents or failures.


  5. Four Examples of SiMD IV&V Gone Wrong


    When we go to a hospital or doctor’s office to be treated, we know that sometimes things might not go perfectly. Typically, we worry about the human elements. What happens if I don’t respond to the prescription my doctor gives me? What if a mistake during a routine surgery leaves me injured?

However, one all-too-real possibility is that the device being used to perform life-saving treatment fails when you need it most. As medical devices grow more complex, so too does the software that drives them. While FDA medical device regulations are among the strictest in the world, devices can still get to market with hidden problems that have the potential to cause great harm if not found quickly.

A well-executed Independent Verification & Validation plan can ensure your software doesn’t cause problems in the field

When they do, they are subject to a recall—a costly endeavor that sometimes requires all devices to be repaired or replaced. According to McKinsey & Company, “non-routine quality events” cost the medical device industry an average of $2.5–5 billion per year, with an average of one major quality event per year that results in a 13% stock price drop across the industry. It’s a big problem, and it isn’t going away any time soon. Medical device recalls hit a two-year high in Q2 of this year, with safety and software issues the leading concerns.

One way the FDA prevents this is by mandating Independent Verification & Validation (IV&V) for all Class II and Class III devices. Essentially, IV&V is a third party running tests and producing documentation to show regulators that the product will perform as intended. While this process helps catch most problems before they occur in the field, sometimes things slip through the cracks.

    There are two main categories of medical device software: Software as a Medical Device (SaMD) and Software in a Medical Device (SiMD), also referred to as embedded software. As the applications and testing procedures for these categories are slightly different, for the purposes of our discussion let’s focus on SiMD.


    SpaceLabs’ Arkon anesthesia delivery system
Trouble started with the system as early as 2013, when reports began to trickle in claiming that the system would, without warning, enter a “failure state” that would render the device inoperable—even if it was in use. This was a serious concern, as it required intervention to manually deliver oxygen and gas to the patient, potentially causing hypoxemia and death in severe instances.

The troubled Arkon Anesthesia Delivery System

In 2014, the FDA issued a Class I recall of the device, mandating a software update to solve the issue and strongly advising against its use until the update was applied. Sixteen units were affected by this action.

    Unfortunately, the root cause of the issue was never truly addressed, and the FDA was forced to issue another recall of the device in 2017—this one larger in scope, with 110 units requiring immediate upgrades.

    Despite all the time and money spent developing this device, SpaceLabs was ultimately forced to discontinue the production and sale of the entire product line in 2019, facing additional recalls in other countries such as Canada. Devices still in the field will cease to be supported sometime in 2026.

While we’ll never know exactly what issues caused this device to ultimately fail, had they been caught early in the development phase instead of in the field, SpaceLabs may have been able to keep the product on the market.

    Vyaire Medical Bellavista 1000/1000e Series Ventilators
    During the COVID-19 pandemic, ventilators made international news, drawing attention to their life-saving ability to provide oxygen to those unable to breathe on their own.

But what if one of those ventilators were to fail, forcing the patient to suddenly take an unassisted breath? That’s exactly what happened with Vyaire Medical’s Bellavista 1000/1000e series ventilators.

    With pressure on ventilator manufacturers to increase production and optimize their systems, a software update intended to make the system safer had the opposite effect. After installing software version 6.0.1600.0, a memory conflict would occur when setting the data communication to a specific port, triggering an alarm and causing the device to malfunction.

    While there were thankfully no fatalities from this incident, it was still one of the more serious events in recent years with 18 complaints and seven injuries.

    The adverse reaction of the existing device to the software update hammers home the need for thorough IV&V that treats an upgraded device as a brand new one, subject to the same rigorous testing procedures that ensure the device will function when it matters most.

    Covidien Puritan Bennett 840 Series Ventilator
In December of 2013, this ventilator was recalled due to a software issue that would trip a diagnostic code, ceasing all function and triggering a safety alarm. Because ventilators are typically only used for patients who can’t breathe on their own, this issue was obviously taken seriously and given another Class I designation by the FDA.

    While this issue was fixed relatively quickly, the device has been recalled multiple times in the years following for various problems, even after the 2015 acquisition of Covidien by Medtronic. It is still on the market today, deployed in hospitals and care centers around the world.

    So what went wrong? Based on publicly available information, it seems like a software issue that thorough unit testing would have caught during IV&V.

    Medtronic Synergy Cranial Software and Stealth Station S7 Cranial Software
    Another repeat offender was Medtronic’s neurosurgery navigation tool. The Synergy Cranial Software and Stealth Station system allows surgeons to navigate complex procedures with precision by providing 3D images of a patient’s brain to clearly identify anatomical structures and surgical tools.

A representation of the Stealth Station system

In 2018, reports began to surface of the navigational software not aligning correctly, creating the potential for serious harm. Neurosurgeons using the software as a guide would see that their tool had not yet reached its intended target when, in reality, it had. When this occurs, the surgeon could insert the tool too deeply and damage the brain.

In 2019, 5,487 devices were recalled, with an additional 943 recalled due to a similar issue in 2021. Despite initially receiving regulatory approval, this type of software error likely could have been avoided through more diligent unit testing that accounted for all potential variables.

    How can I avoid a recall?

A common thread throughout these events is that each resulted from a software testing plan that wasn’t designed or executed properly. It shows that the FDA and other regulators aren’t perfect, and that regulatory approval does not mean a device is 100% safe from causing harm and exposing its manufacturer to liability.

    The best way to avoid being on a list like this is to build Independent Verification & Validation into your product launch strategy from the very beginning. Understanding the limitations of safety-critical applications can help structure your product development in a way that is safe, sustainable, and profitable.

For assistance in preparing your software testing plan, check out our Independent Verification & Validation services or speak with an expert today.


  6. NVIS Display Selection Guide


Certain operating environments for ruggedized systems call for the need to be unseen. This is where Night Vision Imaging Systems (NVIS) come in. In simplest terms, these infrared-driven solutions allow operators to use their tactical equipment without emitting significant light. As you can imagine, the most popular application is military. These systems are found in tanks, field command posts, on ships, and so on. Essentially, NVIS is used anywhere the operator doesn’t want to be seen. This also includes industrial, marine, medical, avionics, and other uses.

You’ve likely seen NVIS technology used in action movies or ads for the armed forces. Despite how cool those night vision goggles look, at the heart of every NVIS system is the display. Many LCDs today feature LED backlights that produce light at a modulated frequency, letting users see the screen without being overwhelmed. They maintain a low-light environment, avoiding disruption of the operator’s night-adapted vision. Since the applications for these systems are wide and varied, an extensive range of attributes and options is available.

A blackout switch on this combat-ready Barracuda helps prevent giving away your position to the enemy

    Let’s break down some of the bigger ones:

    Ruggedization Factors – these refer to any specialty design consideration that is driven directly by the application’s physical environment. Will it be used in a dusty desert? Deep under the sea? In a tropical climate? These factors will determine if the monitor needs to be sealed, or perhaps requires specialty glass or other material/design considerations.

    For applications where operators may be on the bridge of a ship or on a flight deck, EMI is a concern. This can be reduced or eliminated with the addition of an optically bonded vandal shield.

Screen Size and Resolution — as we’ve mentioned in the past, resolution doesn’t tend to be a huge driver of cost. That said, you’ll always want to get the best resolution available. As for physical size, it’s dictated by the application itself. Just imagine: a shoulder-mounted missile system can only accommodate a small display unit, while a spacious command center would demand something much larger.

Mounting/Enclosure — does the unit need to flip down from the ceiling, will it stand alone, or perhaps “plug in” to a much larger workstation? Rack or panel mounted? Answers to these questions will determine the type of enclosure and mounting options that are best for the application.

    Connectors — this one might be obvious, but certainly worth mentioning. When sourcing an NVIS LCD, the buyer needs to make sure they are selecting one with outputs that are compatible with the system’s input requirements. They’ll want to avoid the use of converters or adaptors when possible, to help keep the system more efficient, while saving some space and money.

    Australian Army soldiers from 1st Battalion, Royal Australian Regiment, test the new Mark 47 L40-2 lightweight automatic grenade launcher at Port Wakefield in South Australia on 2 September 2016.
    While the above-mentioned factors are the bigger ones, there are many other options on the market to help accommodate the range of use cases. Here are a few:

• Blackout switch — In certain situations, some light may still be visible in pitch darkness, particularly in open environments. Hit the blackout switch, and the display immediately goes dark.
• NVIS/Daylight mode shifting — For many applications, the equipment is used during day and night. To accommodate both, the user needs the ability to control intensity and frequency. Dual-mode monitors facilitate this by using two LED back rails for illumination, one for each mode. The user can easily switch from one to the other.
• Antireflective/Antiglare Coatings — as the name suggests, these reduce the reflections and glare that can render a display unreadable in bright settings.

As with many mission- and safety-critical applications, component attribute selections are determined by the environment itself. Since many options are available, it’s important to specify a solution that, at the very least, hits your minimum requirements. In some cases, you’ll need to prepare for the more extreme scenarios to optimize the equipment’s usability. But be sure to analyze and accommodate all potential factors and related options before making your final selection. The mission’s success could depend on it!


  7. Pros and Cons of Agile Development for Safety-Critical Applications


    Agile software development methods are growing increasingly popular in product development. You can see why—an emphasis on speed, collaboration, and innovation sounds like a great concept.

However, in safety-critical applications, there are concerns that agile methodology cannot ensure that the end product will not malfunction out in the field.

So do those concerns have merit? Let’s take a deeper dive into the basic tenets of agile development, and the pros and cons of its implementation for safety-critical systems.

What is Agile?
Agile is a methodology that emphasizes collaboration and constant iteration, with the end goal of developing the most innovative system possible. It was created by a group of developers who found traditional software development processes (e.g., Software Development Lifecycle, Waterfall) too restrictive, limiting the ability to make changes on the fly. The Agile Manifesto of 2001 cites four core values:

    1. Individuals and interactions over processes and tools
    2. Working software over comprehensive documentation
    3. Customer collaboration over contract negotiation
    4. Responding to change over following a plan

    Essentially, agile development works by breaking a project down into several stages—with each stage providing an opportunity for feedback and collaboration from different stakeholders. There are several different types of agile methodologies, such as Extreme Programming (XP), Scrum, Lean and Kanban, Crystal, Dynamic Systems Development Method (DSDM), and Feature-Driven Development (FDD), with XP and Scrum being the most popular. However, all methods follow the same general principles:

    1. Customer satisfaction
    2. Early and continuous delivery
    3. Embrace change
    4. Frequent delivery
    5. Collaboration of businesses and developers
    6. Motivated individuals
    7. Face-to-face conversation
    8. Functional products
    9. Technical excellence
    10. Simplicity
    11. Self-organized teams
12. Regular reflection and adjustment

    As you can see, with agile development, the development lifecycle never really ends. There is an emphasis on continuous iteration and improvement with the end goal of building the best possible product—and keeping it that way.

    However, in safety-critical certified environments, is this the best approach?

    Agile vs Traditional Approach
Compared to agile methodology, traditional software development approaches, such as System Development Lifecycle (SDLC), Waterfall, or V-Model, are more rigid in structure.

The Waterfall model

    Considering international standards such as IEC 62304, this rigidity can be interpreted as a strength. For example, in the Waterfall model pictured above, a linear plan is followed where each stage of the process must be thoroughly vetted and approved before proceeding to the next stage. A new requirement presented during the implementation phase will not be considered, as the requirements phase has already been completed—changing a requirement at this stage would force the design stage to be repeated/altered, creating inefficiency.

Regulators like the FDA and FAA tend to favor traditional approaches, as they provide clear requirement traceability and make clear to auditors that every possible consideration has been taken to ensure a product is safe. Contrast this with the agile approach: new requirements slipped into a product development lifecycle (and, more importantly, their impact on the existing system) may not be thoroughly verified and validated, potentially causing quality issues in production—if not found by regulators first.


However, when taking the ability to innovate into consideration, the rigidity of traditional software development approaches can also be considered a weakness. An emphasis on collaboration and innovation lets new ideas be added to a product during development, allowing it to fully realize its market value. It can also save on costs, as the implementation of new features or improvements does not require a full restart of the development cycle.

So what’s the best approach?
As with anything, the real answer to which software development approach is best is: it depends on your application. In highly regulated industries where quality control is paramount, a traditional approach can help prove to regulators that you take safety seriously. However, in highly competitive industries where innovation is simply a cost of doing business, you may want to consider adopting more agile methodologies to keep up.

    Regardless of how you develop your software, as long as you can prove to regulators that you can verify and validate that you built your system correctly, you should be able to get to market successfully. If you have a safety-critical device that needs regulatory approval, contact our Independent Verification and Validation team to help you get there.


  8. Five Benefits of Working with US-Based Manufacturers


    Over the last several decades, the commercial world has become smaller and smaller. While there are many contributing factors, trade policies and technological advances such as the Internet are considered the biggest ones. Regardless of the exact influences, they all add up to the ability for you to buy products made just about anywhere in the world. So, since you can source globally, why should you stick with US-based manufacturers? Let’s review some of the key benefits:

    1) Better Quality

    Many applications require QMS Certifications such as ISO 9001
To be fair, it would be misleading to simply state, “If it’s made in the US, it’s a quality product. If it’s made overseas, it’s not.” Quality happens at the company level, not the national level. However, US companies as a whole tend to be more quality conscious, with stringent quality assurance programs in place. As we’ve discussed in a previous blog, quality cannot be sacrificed in favor of lower prices for ruggedized applications. Systems and components must perform reliably and consistently, all the time. This is only achieved when a manufacturer takes quality very seriously, with practices in place to ensure quality every step of the way. That is why US manufacturers today have entire departments and teams dedicated to quality, led by trained quality managers.

    2) Traceability
In the unlikely event that a ruggedized system fails due to a faulty part, traceability is a critical tool for identifying the root cause and remedying it. This is especially true if the issue affects multiple units, which could lead to a recall. When the life’s journey of an inferior or defective component or raw material can be traced back to its origin, it becomes much easier to figure out what happened. It also makes it easier to stem and permanently resolve the issue if the offending parts are still being used in production.


When dealing with military, aerospace, and other critical applications, systematic traceability is non-negotiable. Know that traceability applies to more than just materials and components; it covers the entire manufacturing process, from design to delivery. So, how does all of this apply to US manufacturers? It is much easier to verify a company here in the States than one that is overseas. Similar to the commitment to quality programs in the US, traceability is now hardwired into the DNA of countless reputable manufacturers.

    The traceability umbrella covers services as well. Let’s look at a big one: engineering. A major benefit of working with US-based companies is that many of them still do engineering either in-house or they partner with US-based product development firms. On the other hand, overseas manufacturers tend to pull from deep pools of talent, consisting of hundreds or thousands of people. While this offers them flexibility, it comes at a steep cost: if something goes wrong due to an engineering mistake, it could be exceedingly difficult, if not impossible, to find that engineer to help evaluate and remedy the issue.

    3) Ease of Doing Business
    Let’s face it: it’s generally easier to do business in your home country. You speak the same language. Of course, millions of people speak English all over the world, but it gets tricky when dealing with highly technical products and their specifications. In our world, the terminology is quite nuanced, and one little error due to a seemingly minor miscommunication can ultimately lead to failure. Absolutely unacceptable in a safety- or mission-critical application.

    Our facility in East Hartford, CT is open for visitors
Also, there is the challenge that time zones create. The continental US spans only four, meaning you can communicate much more quickly and efficiently. Emails are answered the same day. For those of us who still occasionally pick up the phone, people will be there to answer. In-person and virtual meetings can be scheduled with greater flexibility. Let’s face it, nobody wants to do a video conference with a supplier at 3am local time.

    Here are a few other things to consider: For buyers that require plant visits and audits, it’s much easier to do that in the US, rather than travel overseas. In unfortunate situations where something goes horribly wrong, it’s much more difficult to legally pursue a company abroad than it is at home.

    4) Tighter Turnaround Times
When purchasing products from US companies, you will usually get them a lot quicker than when sourcing overseas. It’s a simple matter of distance – it takes much less time to ship something within the US than to wait for it to arrive from abroad. Especially when you consider that much of what comes from overseas is literally shipped, as in put on a boat. This means your material will take weeks or months to arrive at the port, be unloaded, and be trucked to its final destination.

    Shipping delays can disrupt international supply chains
    5) Lower Shipping Costs
As an added bonus, shipping costs tend to be lower as well. Keep in mind that there are many more options when shipping within the US than to the US. This competition helps keep freight prices in check.

We’ve all experienced or heard the horror stories inflicted by the recent pandemic. Companies struggled to get shipments from global sources on time, if at all. Shipment delays stretched from weeks into months. This affected manufacturers’ ability to produce and ship product on time. Much of this could have been avoided by maintaining local supply chains.

  9. Key Considerations when Sourcing a Sealed Monitor


For most monitor applications, a standard off-the-shelf unit gets the job done. But certain situations require a sealed monitor to protect it from the elements. Uses range from car washes, boats, grain silos, and mining to explosive and other extreme environments. You get the idea. But it’s not just a simple case of: do I need a sealed monitor or not? There are several considerations to be made.

Ingress Protection (IP)

This IP67-rated Barracuda display isn’t just protected from water, but sand and dust too.

Perhaps the biggest factor is the required IP (Ingress Protection) rating. This two-digit number grades the resistance of an enclosure against the intrusion of solids (like dust) and liquids; IP67, for example, means dust-tight and protected against immersion up to one meter. The operating conditions and threats within will ultimately dictate the required rating. This leads to a few questions: What is the monitor being protected from? Perhaps water? If so, simply splash-proof or water-tight? How deep might it go? And for how long? Will it be exposed to dust? Should it be resistant to atmospheric conditions? Even here, you need more than simple yes-or-no answers. For instance, a monitor protected from sand would require a different degree of protection than one exposed to, say, flour or other fine airborne particles.
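If it helps to see the digit coding spelled out, here is a toy Python lookup covering a few common values (a simplified subset of the IEC 60529 tables, for illustration only):

```python
# Simplified lookup for the two IP-code digits (subset of IEC 60529).
SOLIDS = {
    0: "no protection",
    4: "objects larger than 1 mm",
    5: "dust-protected",
    6: "dust-tight",
}
LIQUIDS = {
    0: "no protection",
    4: "splashing water",
    5: "water jets",
    6: "powerful water jets",
    7: "temporary immersion up to 1 m",
    8: "continuous immersion beyond 1 m",
}

def describe_ip(code: str) -> str:
    """'IP67' -> human-readable summary of both digits."""
    solids, liquids = int(code[2]), int(code[3])
    return f"{code}: {SOLIDS[solids]}; {LIQUIDS[liquids]}"

print(describe_ip("IP67"))  # the rating on the Barracuda pictured above
```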

    Altitude/Depth

If the monitor is operating under water, the maximum potential depth needs to be known. The deeper it goes, the higher the pressure will be, and the more vulnerable the monitor becomes. Units going way down require more durable construction and sealing materials and methods. For instance, the solution may include either an overlay or reinforced glass.


    A surface vessel (like the combatant craft pictured here) has very different requirements than a submarine

    Higher altitudes, or frequent changes in altitude, pose similar challenges. If you’ve ever been on an airplane and had your ears pop, you’ve experienced the impact. Frequent changes in altitude will typically cause condensation, which is a big nuisance, and can be prevented if configured accordingly.

For monitors that will spend much of their life plunged into the depths of the sea, cooling becomes an issue. If they are used in cooler water, then the water itself can help cool them. But in a sealed environment, you don’t have the luxury of bringing in cool air.

    Heat Dissipation

When dealing with a sealed system, the internal ambient temperature rises by about 10°C, which makes sense: by eliminating airflow into a system, you’re also eliminating opportunities for heat to escape. This presents an issue because too much heat can put internal electronics at risk—despite efforts to optimize convection cooling through internal heat sinking, circulating fans, and low-power electronics.

To mitigate this, a number of options are available, each with tradeoffs to weigh for your application. For example, ambient light sensors can be installed to adjust display brightness dynamically, reducing the chance that operators leave the display at full brightness, which creates more heat. Thermal sensors can also be installed that communicate with intelligent backlight controllers, automatically lowering the brightness when internal heat reaches dangerous levels. These options can prevent critical failure of the electronics—but may require an override feature for critical situations such as combat.
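As a rough sketch of the control logic such a backlight controller might implement (the threshold, hysteresis, and throttle values below are invented for illustration, not taken from any real product):

```python
# Hypothetical thermal-throttling logic for a sealed display backlight.
MAX_SAFE_C = 70.0    # assumed internal temperature ceiling
HYSTERESIS_C = 5.0   # start ramping down before the ceiling is reached

def target_brightness(temp_c: float, requested: float,
                      override: bool = False) -> float:
    """Return backlight duty cycle (0.0-1.0), throttled by temperature.

    `override` models the combat/critical-situation escape hatch the
    text mentions: the operator knowingly accepts the thermal risk.
    """
    if override:
        return requested
    if temp_c >= MAX_SAFE_C:
        return min(requested, 0.25)  # hard throttle at the ceiling
    if temp_c >= MAX_SAFE_C - HYSTERESIS_C:
        # Ramp brightness down linearly as we approach the ceiling.
        scale = (MAX_SAFE_C - temp_c) / HYSTERESIS_C
        return min(requested, 0.25 + 0.75 * scale)
    return requested

print(target_brightness(60.0, 1.0))                  # cool: full brightness
print(target_brightness(68.0, 1.0))                  # warm: throttled to 0.55
print(target_brightness(72.0, 1.0, override=True))   # operator override: 1.0
```

The hysteresis band is the interesting design choice: ramping gradually avoids the backlight visibly snapping between two brightness levels as the temperature hovers around the limit.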


    Our 90-3064-023 is equipped with an intelligent power management controller to reduce temperature

Another option is an intelligent power management controller. This can limit the amount of power a system consumes and apportion power based on a priority list. For example, some of our sealed displays are limited to 12W of power consumption at any given time. These units can be programmed to prioritize power for the backlight and supply only the remaining allowable power to something like an LCD heater.
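Priority-based budgeting of this kind can be sketched in a few lines. The 12W cap comes from the text above; the loads and their wattages are illustrative assumptions:

```python
# Allocate a fixed power budget to loads in priority order.
BUDGET_W = 12.0

loads = [  # (name, requested watts), highest priority first
    ("backlight", 8.0),
    ("lcd_heater", 6.0),
    ("usb_accessory", 2.5),
]

def allocate(budget: float, loads):
    """Grant each load as much of its request as the remaining budget allows."""
    grants = {}
    for name, wanted in loads:
        grant = min(wanted, budget)
        grants[name] = grant
        budget -= grant
    return grants

print(allocate(BUDGET_W, loads))
# {'backlight': 8.0, 'lcd_heater': 4.0, 'usb_accessory': 0.0}
```

Here the backlight gets everything it asks for, the heater runs degraded on what remains, and the lowest-priority load is shed entirely, which mirrors how the controller described above protects the most critical function first.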

We can also use transflective or genflective panels rather than high-brightness backlights. Reflective panels allow the display to be read in direct sunlight without the extra power (and heat) required to keep the display bright. However, overlays such as touch screens or EMI filters reduce the optical transmission of light, meaning this option is only appropriate in specific applications.

    How “Extreme” Is the Environment for the Monitor?

A monitor in a car wash is one thing, but one on a spacecraft heading to the moon is completely different. The car wash application can likely be satisfied with a standard sealed monitor. But one going into space, or to be employed in other exploration, military, or defense applications, might need customization. For some of these use cases, requirements may exceed those of the highest IP rating. This means additional customization, and a larger price tag. But as we discussed in a previous blog, the investment is worth it in the long run, since cutting corners can get the user in trouble. In one case, the military was using monitors rated for a certain depth. They were operating as intended, until the boats sank to a level much deeper than the rating covered, resulting in a failure.


    Sometimes being sealed isn’t enough—your monitor needs to be rugged, too

Other considerations must be made to determine the optimal sealing for the monitor. For example, saltwater environments are much more corrosive than freshwater. Other environments that contain corrosive materials, or factors such as fog and steam, need to be addressed accordingly as well.

    Cable and Connector Considerations

    If your display is submerged, all the connectors also need to be sealed to prevent corrosion

    As the old saying goes, a chain is only as strong as its weakest link. In the case of sealed monitors, all interconnects must be properly sealed. This includes the cables themselves and of course their connection points. If the cable isn’t sealed, it will eventually corrode in many situations, thus damaging the unit or rendering it useless. This goes for speakers as well, built-in or aux. They need to be fully sealed for the same reason.

    Are You an Expert Yet?

As you can see, there’s a lot to consider when speccing out a sealed monitor. Though first, you have to know if you actually need one. Over the years, we’ve learned that a lot of buyers don’t know when they do. In many cases, they will request a sealed monitor when they don’t actually need one. The opposite holds true as well. To avoid over- or under-speccing your monitor, be sure to speak with an expert before placing your order.

  10. Understanding Key Options for Ruggedized Displays


    When it comes to configuring ruggedized equipment, there’s no shortage of options. In fact, our modeling system provides over 120 million configurations in total! That’s a lot to cover. So today, we’ll focus on the primary attributes available for ruggedized displays.

    Physical Size

    Ultimately, this will drive a lot of your decision making. In order to determine this, you need to start by answering a few basic questions: How are you using it? How much space do you have? How big do you want or need your display to be considering these things? The biggest tradeoff when it comes to size is portability vs total available viewing area. In other words, smaller displays will be much easier to transport due to their smaller physical dimensions and lighter weight. But they will not offer the viewing area that a larger screen will provide.


    In military applications, rack space is often a limiting factor when choosing a display size (U.S. Navy photo by Mass Communication Specialist 3rd Class Vance Hand/Released)
Resolution

The concept of resolution is simple: the more dots you have, the higher the image quality. The good news here is that resolution is usually not a major cost driver. But there are some other factors to consider. For instance, resolution options may be limited for small-volume orders. Further, not all resolutions are available in all sizes. For example, if you are looking for 1920×1200, which is an HD output with a bottom bar, it’s only available in a few specific sizes.

Inputs

This is another critical aspect of specifying the right display. Ultimately, the required inputs are determined by other system components and their configurations. This includes your video source outputs. What are they? Will they work as is? Can they be converted? And don’t forget about power. What’s available? Can the display be configured to receive existing power?

Environmental Requirements

Ruggedized equipment wouldn’t exist if it weren’t for all of the challenging factors and threats found in unfriendly, harsh and extreme environments. But before we look at those, let’s consider the basics. Where will the display live? Countertop? Desk? Wall? Will it be a standalone component with its own enclosure or part of a much larger system?

Ingress Protection (IP)

Outdoor environments typically require a display to be both rainproof and sunlight readable. Some applications (like charging stations) also require protection against Electromagnetic Interference (EMI).

Now the fun stuff: environmental factors such as temperature, humidity, and altitude. Then there’s exposure to the elements and threats such as salt, fungus, or blowing sand. Will it be in a calm office or in the middle of a battlefield? When taken into consideration, the responses will determine factors such as glass coatings, sealing requirements, impact resistance and more.

Human Interface/Interaction

These days, we can do a lot more with monitors than just look at them. We can interact with them. So how exactly will your users be interfacing? Is a touch screen required? Do you want to go hi-tech and use gesture control, or will buttons do the trick? Environmental factors will limit your options here. For instance, in a high-vibration environment such as a tank or helicopter, a touch screen will bring more frustration than utility.

Other Custom Features

We’ve touched on all of the major options for a ruggedized display. But there are plenty of other customization options. These include things like audio preferences. Will you be using speakers, or perhaps headphone jacks? Other customization is available as well, all depending on what’s needed.

Ultimately, your final selections will be dictated mostly by where and how your ruggedized display is being used. While we do a lot of work for mission- and safety-critical military, medical, and transportation applications, we handle some fun commercial ones as well. One story that illustrates how use drives attribute selection is something you wouldn’t likely expect: a rickshaw used in a humid subtropical climate. Space limitations meant the display had to be small, but since the driver reads maps on it, it needed to be clear and legible at all times. When it’s raining. When it’s soaking up high noon’s intense sunlight. And don’t forget it’s in a rickshaw, so it’s subject to some pretty intense vibration and occasional jolts. Our solution focused on selections for optimal readability, durability, and stability, making the overall experience more pleasurable and even safer for all.
