Craig Glidden, General Motors Executive Vice President and General Counsel, will join LegalRnD on October 11th to speak about how corporate legal departments have changed over time and the implications for lawyers and law students. He is well known in the legal industry as both a leader and an innovator.

Mr. Glidden argues that the evolution and influence of corporate legal departments in shaping the modern legal services industry is too significant to be ignored by law students or law schools.[1] In his talk at MSU Law, he will discuss technology, innovation, and how they are changing legal practice. LegalRnD students are familiar with his writings on data-driven law practice, such as using decision trees, comparing “intuitive” and “quantitative” models for decision making and managing legal risk.[2]

At GM, Mr. Glidden leads a team of attorneys and professionals who serve GM’s regional and functional teams in more than 30 countries. Before joining GM, he was chief legal officer for LyondellBasell, one of the world’s largest plastics, chemicals and refining companies. According to GM CEO Mary Barra, “Craig Glidden has had a distinguished career managing complex legal issues around the world, and his broad legal and senior management expertise fits perfectly with our strategic priorities and plans for global growth.”[3]

A recent article identified MSU Law LegalRnD as the number one law school at the forefront of emerging technologies.[4] LegalRnD’s engagement with the legal industry and hosting of industry leaders like Mr. Glidden is one of the reasons why.[5] We are incredibly excited to welcome Mr. Glidden and look forward to seeing you there!

Faculty and students, please order your free ticket for this event at:



[1] Glidden, Craig B. “The Evolution of Corporate Legal Departments.” Florida State University Business Review, 28 Feb. 2012.

[2] Glidden, Craig B., Laura Robertson, and Marc Victor. “Evaluating Legal Risks and Costs with Decision Tree Analysis.” Chapter 12 in Haig, Robert L., Successful Partnering Between Inside and Outside Counsel: Advice from the Experts, West, 2016.

[3] “Craig Glidden Named GM’s New General Counsel.” General Motors, 15 Feb. 2015.

[4] O’Keefe, Kevin. “Michigan State College Of Law Ranks Number One.” Above the Law, 30 Aug. 2017.

[5] LegalRnD – The Center for Legal Services Innovation at MSU Law

Much remains unknown about the future of the military as it converges with technology. As fearmongers and futurists grow more divided, the vast gaps between the two camps continue to go unaddressed. Armed forces worldwide have used artificial intelligence (AI) for decades, and recent advances have allowed the United States (U.S.) to develop these capabilities in ways it formerly could not, positioning us in a hyper-competitive environment. AI’s vital role in the military will only increase in the future. As a country, we should embrace the ever-changing market by staying informed rather than remaining stagnant in our former ways. Nonetheless, will AI lower the threshold of war?

Luminaries such as Elon Musk, Stephen Hawking and Nick Bostrom have all warned against emerging technologies, especially AI, which they believe may bring the apocalypse. In his book Superintelligence, Nick Bostrom argues that once machines surpass human intelligence, they will have the power to mobilize and could decide to eradicate humans entirely, extremely quickly, using any number of strategies.[1] He believes that after AI destroys our existence, the world will be “a society of economic miracles and technological awesomeness, with nobody there to benefit…a Disneyland without children.”[2] Musk has warned that people should be very careful about AI, as we are “summoning the demon,”[3] and Hawking states that the “development of full AI could spell the end of the human race.”[4]

Hesitation Becomes Reality

For centuries, military weaponry has evolved alongside the technological advances made by engineers and researchers. Just as the Industrial Revolution produced powerful and destructive machines such as airplanes and tanks that diminished the role of individual soldiers, AI is permitting the Pentagon to restructure the places of man and machine on the battleground. This is happening in the same way AI is transforming everyday life, with computers that can speak, see, and hear, and cars that drive themselves.

Almost undetected outside defense circles, the Pentagon has made AI the focus of its strategy to preserve the United States’ place as the world’s most dominant military power. The government is “spending billions of dollars to develop what it calls autonomous and semi-autonomous weapons and to build an arsenal stocked with the kind of weaponry that until now, has existed only in Hollywood movies and science fiction, raising alarm among scientists and activists concerned by the implications of a robot arms race.”[5] The impact of the rapid expansion of the commercial market on autonomous systems development cannot be overstated. Hence the Pentagon’s latest budget earmarked “$18 billion to be spent over three years on technologies that included those needed for autonomous weapons.”[5]

Though military leaders say that we are approximately ten years from experiencing this technology first-hand, weapons programmed to kill without any reference to human authority are becoming a reality. Machines integrated with AI capable of using lethal force without human interference already exist. The United States military has developed pilotless aircraft, unmanned tanks, autonomous drones and robots capable of selecting, shooting and destroying their own targets. Robotic fighter jets that would fly into combat alongside manned aircraft are currently being designed. The Department of Defense has tested missiles whose integrated AI decides what to attack, and it has built ships “that can hunt for enemy submarines, stalking those it finds over thousands of miles, without any help from humans.” The internal workings of war machinery have evolved since the sixteenth century in response to new technologies, and the U.S. military already uses a plethora of robotic systems on the front line, from bomb-disposal robots to reconnaissance and attack drones. The difference to note is that those are remotely piloted systems, so a human always retains a high degree of control over the machine’s actions.

So, what do you think, will AI lower the threshold for going to war?

Interested in continuing this conversation? Reach out to me on Twitter @Anita_Western! 


[1] Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford UP, 2016. Print.

[2] Cellan-Jones, Rory. “Stephen Hawking Warns Artificial Intelligence Could End Mankind.” BBC News. BBC, 02 Dec. 2014.

[3] McFarland, Matt. “Elon Musk: ‘With Artificial Intelligence We Are Summoning the Demon.’.” The Washington Post. WP Company, 24 Oct. 2014.

[4] Mosbergen, Dominique. “Stephen Hawking Says Artificial Intelligence ‘Could Spell the End Of The Human Race’.” The Huffington Post, 02 Dec. 2014.

[5] Rosenberg, Matthew, and John Markoff. “The Pentagon’s ‘Terminator Conundrum’: Robots That Could Kill on Their Own.” The New York Times. The New York Times, 25 Oct. 2016.

Healthcare remains the top area of investment in artificial intelligence as measured by venture capital deal flow.[1] Artificial intelligence is what enables a digital device or “robot” to see and recognize objects or solve problems that require a level of intelligence. The primary aim of artificial intelligence applications within the medical field is to analyze relationships between prevention or treatment techniques and patient outcomes.[2] Programs with artificial intelligence have already been developed and tested in many practices, such as medical examinations, treatment diagnosis and protocol development, drug development, personalized medicine, and patient monitoring and care. World-renowned medical institutions and technology companies like the Mayo Clinic, the National Health Service, IBM and Google have all created solutions to a variety of problems that are currently used in the industry. For example, IBM works with CVS Health on artificial intelligence applications for chronic disease treatment and with Johnson & Johnson on analyzing scientific papers to find new connections for drug development.[3]

Hitting Home with Client Care, Literally

With new strides happening in the healthcare industry, people’s entire medical histories will be accessible to physicians, privacy protected and available to a variety of entities ranging from clinics to specialty hospitals. The result will be more beneficial doctor’s visits for the patient. This revolution will result in more widespread access to healthcare, because if a patient is unable or uninterested in visiting a clinic in person, they will be able to contact their healthcare provider on a smartphone, send a picture or video of their condition, and a computer will read the image or video and recommend how to proceed. Thanks to machine learning, artificial intelligence can outperform the human eye and brain at pattern recognition, so this avenue will result in improved patient care.

People with chronic conditions will have the option of in-home care from medical professionals who check in with the patient virtually. Healthcare professionals will be able to chat remotely with patients about data received from implantable, wearable or external sensors. Those sensors will be constantly monitored by robots with artificial intelligence that will double as caregivers. With artificial intelligence integration, people who are sick will receive care at lower cost in a more relaxed setting.

What Happens When at Home Care Isn’t Enough?

Hospitals will limit themselves to the diagnosis of rare or complex conditions and intensive surgeries. The goal is for hospitals to no longer have a designated ICU, because every room will be a “self-contained ICU.”[4] Each room will be able to connect with remote specialists through built-in cameras for examinations, which will cost the patient less because they will no longer have to travel to meet the specialist. Hospital staffing ratios will vary according to the individual patient’s needs as determined by artificial intelligence risk-monitoring and treatment algorithms. Physicians will also receive help with diagnosis and evidence-based treatment from cognitive computing systems like IBM’s Watson.[5]

Will It All Be Rainbows and Butterflies?

While artificial intelligence prevents many errors in the medical field, it is also causing new kinds of mistakes that the industry has not experienced before. One example is an overdose case at the University of California, San Francisco, where doctors delivered a massive overdose of an antibiotic to a 16-year-old patient. Though the error was safely remedied, the lesson was clear: artificial intelligence still has a great deal of progress to make in the healthcare industry.[6]

Artificial intelligence is continuously thinking, processing and updating itself to ensure productivity, but that does not always mean that it will make the correct decision. Entirely remote patient care, a universal database for medical records and having every patient room at a hospital be an ICU isn’t going to happen tomorrow. That doesn’t mean that this type of technology boom is thirty years away either. There is no concrete number of years until this artificial intelligence integration happens, but it has so much potential and I believe this integration will reform healthcare completely, for the betterment of society.

[1] CB Insights Artificial Intelligence report. 28 June 2016.

[2] Coiera, E. (1997). Guide to medical informatics, the internet and telemedicine. Chapman & Hall, Ltd.

[3] Spear, Andrew. “From Cancer to Consumer Tech: A Look Inside IBM’s Watson Health Strategy.” Fortune, 05 Apr. 2015. Web. 2 Mar. 2017.

[4] Buchman, Tim. “The Smarter ICU.” Emory Medicine Magazine. Emory University , June 2015. Web. 06 Mar. 2017.

[5] Weber, David. “12 Ways Artificial Intelligence Will Transform Health Care.” H&HN. American Hospital Association, 28 Sept. 2015. Web. 05 Mar. 2017.

[6] Miliard, Mike. “Q&A: Robert Wachter on health IT’s ‘hope, hype and harm’” Healthcare IT News. HIMSS Media, 02 Apr. 2015. 3 Mar. 2017.

Only six years ago Watson, IBM’s computer system, beat former Jeopardy! champions on national television. Since then, both IBM’s designers and the machines they develop have made great improvements, and the impact on our everyday lives is not far away. After Watson won, IBM combined deep learning and natural language processing[1] to form “cognitive computing,” a type of artificial intelligence (AI) in which self-learning systems mimic the way the human brain works. Today, Watson-powered applications are doing everything from helping in the healthcare field to assisting financial advisors in planning our futures.[2]

Will AI Overtake Lawyers’ Careers?

Many people believe that artificial intelligence is not close to taking over anything more than certain dull, everyday tasks, such as sorting data. However, more mechanical, technology-driven tasks are already outsourced to artificial intelligence. A recent example of artificial intelligence going beyond ordinary, simple tasks is Casepoint’s CaseAssist case evaluation program. CaseAssist is one of the first artificial intelligence eDiscovery platforms that uses new and innovative programming to make the discovery process easier for the human user. It simplifies the process with technology-assisted review, a cloud collection of documents, and by proactively identifying and alerting case teams to potential hot documents, helpful search terms, important dates, and likely “junk” documents.[3] When the user starts to review the documents identified by CaseAssist, the system’s artificial intelligence and algorithms work to present more documents and emails that are automatically identified as potentially relevant to the litigation or investigation.[3] The user can then either accept or reject CaseAssist’s results, which helps the program ‘learn’ what exactly the user wants.
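The accept/reject loop described above is a form of relevance feedback, the general idea behind technology-assisted review. Here is a minimal sketch in Python of that general technique only, not Casepoint’s actual implementation; the document texts and labels are invented for illustration:

```python
from collections import defaultdict

class RelevanceFeedback:
    """Toy technology-assisted-review loop: rank documents by learned
    term weights, then update the weights from reviewer accept/reject
    decisions so later rankings improve."""

    def __init__(self):
        self.weights = defaultdict(float)

    def score(self, doc):
        # Sum the learned weight of each term in the document.
        return sum(self.weights[t] for t in doc.lower().split())

    def rank(self, docs):
        # Present the most likely relevant documents first.
        return sorted(docs, key=self.score, reverse=True)

    def feedback(self, doc, relevant):
        # Reviewer accepts (+1) or rejects (-1) a suggested document;
        # nudge every term it contains accordingly.
        delta = 1.0 if relevant else -1.0
        for t in doc.lower().split():
            self.weights[t] += delta

# Hypothetical document collection and two reviewer decisions.
docs = [
    "merger agreement draft",
    "lunch menu junk",
    "merger negotiation email",
]
tar = RelevanceFeedback()
tar.feedback("merger agreement", relevant=True)   # accepted suggestion
tar.feedback("lunch junk", relevant=False)        # rejected suggestion
print(tar.rank(docs)[0])  # merger documents now surface first
```

Real platforms use far richer statistical models, but the design point is the same: each accept or reject is a training signal, so the system’s suggestions track the reviewer’s sense of relevance over time.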

Artificial intelligence’s influence on legal practice has become a pressing, present issue. Whether it is eDiscovery, practice management, or review of contracts, today’s newest programming uses artificial intelligence and machine learning. Artificial intelligence is breaking ground with many start-ups like Casepoint with CaseAssist, and mature companies like IBM with Watson. Programs such as CaseAssist, which works with lawyers to determine the best way to proceed with a case, will become more popular as we move into the future. Programs powered by artificial intelligence help lawyers satisfy their discovery duties and make better-informed, strategic decisions in both litigation and investigations. In an era where the cost of meeting discovery requirements can exceed what it may cost to resolve a case, identifying relevant information quickly is crucial to a lawyer’s success.

Therefore, yes, robots may one day take over many functions of lawyers’ jobs, but great lawyers separate themselves from average ones by providing clients a certain amount of wisdom, compassion, insight and rational judgment that robots cannot provide right now. Meanwhile, technology helps lawyers work more efficiently, effectively, and enjoyably, leaving them extra free time to spend on other things. Advanced technology and artificial intelligence will greatly transform lawyers’ jobs.

So, Can AI & Lawyers Work Together?

Right now, artificial intelligence cannot replace a lawyer. Lawyers will still make the final decision about how to proceed in each case or transaction. While many lawyers view artificial intelligence as a potential threat to their careers, believing a robot can do their job faster, better, and cheaper, the best proactive approach is to ask whether artificial intelligence can give the firm an advantage over others. If artificial intelligence can help lawyers make better-informed decisions, we should welcome it with open arms. We should think of artificial intelligence in legal practice as a cognitive assistant that helps us learn, search, retrieve, and analyze information.[4] It has the potential to make an attorney more efficient, and “to the extent attorneys perceive a threat to their practice, it may be because there’s been an inefficiency there.”[5] Artificial intelligence has the potential to advance legal research to new heights by pairing humans with computers and supporting their joint performance.

[1] “Deep Learning in Natural Language Processing.” The Stanford Natural Language Processing Group. Stanford University, n.d. Web. 04 Feb. 2017.

[2] Olavsrud, Thor. “10 IBM Watson-Powered Apps That Are Changing Our World.” CIO. IDG, 06 Nov. 2014. Web. 07 Feb. 2017.

[3] Dungarani, Amit. “Casepoint Announces the Release of CaseAssist, the First Artificial in.”PRWeb. PRWeb, 31 Jan. 2017. Web. 04 Feb. 2017.

[4] Garg, Rahul. “Combining Natural Language Classifier and Dialog to create engaging applications .” IBM Watson. IBM, 28 Sept. 2015. Web. 06 Feb. 2017.

[5] Sohn, Ed. “ Can Computers Beat Humans At Law?” Above the Law. Breaking Media Inc., 23 Mar. 2016. Web. 05 Feb. 2017.

Recent Federal Aviation Administration rules authorizing routine commercial-drone flights also establish legal precedents that could affect an assortment of future air-safety regulations. Laws and regulations applicable to drone flights are almost entirely federal, since the federal government “has exclusive sovereignty of airspace in the United States”[1] and the Federal Aviation Administration (FAA) sets all standards for flight safety. Drones are being used in more ways than ever before: they are deployed on search-and-rescue missions, deliver blood and transplant organs rapidly, and have advanced high-definition 3-D topographic mapping. They can be programmed to fly the same ultra-precise route repeatedly, enabling automated up-close inspection of power lines, bridges, pipelines and wind turbines, as well as real-time, multi-sensor mapping of farms right down to individual plants.[2] According to industry estimates, the Small Unmanned Aircraft Systems (UAS) Rule has the potential to generate more than $82 billion for the U.S. economy and create more than 100,000 new jobs over the next 10 years.[3]

The UAS Rule requirements are intended to minimize risks to other aircraft as well as to people and property on the ground. The new regulations allow personal and commercial flight of drones weighing less than 55 lbs., operated during daylight, within the visual line of sight of the pilot (using a drone’s camera does not satisfy this requirement), with a maximum airspeed of 100 mph and a maximum altitude of 400 feet above ground level.[4] A small UAS may be operated from a moving vehicle only if the operation is over a sparsely populated area, and flights over people are limited to those directly participating in the operation. Flights under covered structures or at night are prohibited, as is careless or reckless operation of the drone. Transportation of property for compensation or hire is legal so long as the aircraft, including its attachments and cargo, weighs less than 55 pounds total, the flight is conducted within visual line of sight (not from a moving vehicle or aircraft), and the flight occurs wholly within the bounds of a state, excluding Hawaii and the District of Columbia. Many of these restrictions are waivable if the operator demonstrates that appropriate safety measures will be used consistently.
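As a rough illustration only, the core operating limits listed above can be expressed as a simple pre-flight check. This sketch models just the headline numbers and deliberately omits the waiver provisions, airspace rules, and other nuances discussed in this post; the function name and parameters are invented for illustration:

```python
def meets_basic_uas_limits(weight_lbs, daylight, visual_line_of_sight,
                           airspeed_mph, altitude_ft):
    """Check a planned flight against the basic small-UAS operating
    limits described above: under 55 lbs., daylight only, within the
    pilot's visual line of sight, at most 100 mph and 400 feet AGL.
    Simplified sketch -- waivers and over-people rules are not modeled."""
    return (weight_lbs < 55
            and daylight
            and visual_line_of_sight
            and airspeed_mph <= 100
            and altitude_ft <= 400)

print(meets_basic_uas_limits(4.4, True, True, 45, 350))   # True
print(meets_basic_uas_limits(4.4, False, True, 45, 350))  # False: night flight
```

A passing result here is necessary but nowhere near sufficient for a lawful flight; it simply shows how mechanically checkable the headline limits are compared to the judgment-laden waiver and certification provisions.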

Drone operators will need to be cognizant of the new regulations to avoid preventable legal action. An individual operating a small UAS must be at least 16 years old and must either hold a remote pilot airman certificate with a small UAS rating or be under the direct supervision of a person who holds a remote pilot certificate. To qualify for a remote pilot certificate, an individual must either pass an initial aeronautical knowledge test at an FAA-approved knowledge testing center or hold an existing non-student Part 61 pilot certificate. If qualifying under the latter provision, a pilot must have completed a flight review in the previous 24 months and must take a UAS online training course provided by the FAA. The TSA will conduct a security background check of all remote pilot applicants prior to issuance of a certificate. While an FAA airworthiness certification is not required, the drone operator must conduct a preflight visual and operational check to ensure the drone’s safe operation.

The FAA is expected to expand on these rules in the future to permit a broader range of operations. Therefore, it remains crucial that companies and drone manufacturers stay up to date on developments in the field of drone regulation.

[1] 49 U.S.C. §40103(a)(1)




Written in July 2016 for PS&E Law Firm

Full article can be found at:

I wanted to kick my blog off with a fun post about an application that has been downloaded 500 million+ times to date and is taking over cell phones worldwide. Enjoy!

Pokémon Go has become the biggest mobile game in United States history attracting over 21 million active daily users and over $268 million in revenues since its launch on July 6th, 2016. Yet the handheld game has also brought forward concerns about how exposed our personal information can be in the hands of seemingly nonthreatening applications. To play Pokémon Go, gamers use their smartphone’s GPS to find, capture, fight and train virtual creatures superimposed on the real world shown by their camera. Players can purchase items to advance the game, including coins, eggs and incubators.


The game originally requested permission on the player’s smartphone not only to use the device camera and location information, but also to be granted full access to the user’s Google account — including email, calendars, photos, stored documents and any other information associated with the login. The developers claim that Pokémon Go did not use any information from players’ accounts other than basic Google profile information, but the app still requested that access, which raised red flags. The application’s privacy policy has since been updated, so players with an iPhone should log out and download the update from the App Store, as the updated application only requests access to the user’s name and Gmail address. Players who installed the application on an Android device and logged in with their Google account only granted access to their Google username and email address from the start.

Though Pokémon Go may never actually have been interested in your emails, it is capable of tracking your location and has access to your IP address as well as the webpage you most recently visited before launching the application. The game superimposes virtual graphics (Pokémon) over the real world; therefore the application needs access to maps and location data. Yet this can be accomplished without requesting access to the player’s personal information. With this in mind, players who want to play the virtual real-world game can restrict access to their Google accounts by creating ‘Pokémon Trainer Club’ accounts that are specific to the application and do not request excessive personal information.

Clicking “yes” to application requests that pop up during installation on a mobile device can compromise personal privacy. In their terms and conditions, some applications include clauses stating that they will hand over data to law enforcement officials or other private parties in response to legal requests, yet few people realize this because they do not read the ‘small print.’ If you are unsure which permissions you have previously approved on your mobile device, you can check them. On iOS (Apple devices), tap Settings and scroll down for a list of applications and what each has access to; each application’s permissions can then be evaluated and altered individually. On Android, tap Settings, then Apps under Device Settings, then choose the application and tap Permissions to evaluate each individually. It is very common for users to have applications on their device that put them at a similar risk of exposing personal information as the original Pokémon Go application.