
Exploring the Ethical Implications of AI in Decision-Making

daviddelex

New member
Joined
Sep 24, 2024
Messages
3
Hello

As artificial intelligence becomes more integrated into various sectors, including healthcare, finance, and education, I'm interested in discussing the ethical implications of AI-driven decision-making.

How do we ensure transparency and accountability in AI systems? What safeguards should be in place to prevent biases in AI algorithms, and how can we maintain human oversight in critical decisions? I have checked https://www.typologycentral.com/forums/science-technology-and-future-tech.16-aws developer reference guide but still need help.


I would love to hear your perspectives on the balance between technological advancement and ethical considerations!



Thank you!
david
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,731
Open source the data that is being used for the algorithms. That way we can scrutinize the source data. Today's AI models are still ultimately fancy compression algorithms.

It turns out, fancy compression is pretty useful.
 

mgbradsh

Active member
Joined
Nov 6, 2008
Messages
354
MBTI Type
INFP
Enneagram
6w5
I had a really interesting discussion with ChatGPT about this.

I would say the idea of "ethics" in AI is kind of a veneer that would be used to make it palatable.

But honestly, it doesn't matter. Hackers and scammers are already using AI to write code, create more sophisticated emails, do a better job of baiting people into participating.

Worse than that, countries are using AI to monitor their citizens and affect citizens in other countries, including interfering in elections.

I guess if people are looking for some sort of moral high ground they can have it, but for what?
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,731
From Perplexity.ai:

Generative AI is being applied in various ways to assist and improve healthcare processes. Here are some key examples:

## Medical Diagnostics and Imaging

Generative AI is being used to enhance medical imaging and diagnostics in several ways:

- **Image Enhancement**: AI models can convert low-quality medical scans into high-resolution images, improving the ability to detect anomalies[7].

- **Disease Detection**: AI algorithms trained on medical images can identify early signs of conditions like skin cancer, lung cancer, Alzheimer's disease, and diabetic retinopathy[5].

- **Automated Segmentation**: Generative AI can automatically segment organs or anomalies in medical images, saving time for healthcare professionals[4].

## Drug Discovery and Development

Generative AI is accelerating pharmaceutical research:

- **Compound Generation**: AI models can propose novel chemical compounds with desired properties for potential drug candidates[3].

- **Target Identification**: AI analyzes biological datasets to identify potential drug targets and validate their relevance in disease pathways[4].

- **Clinical Trial Optimization**: AI can analyze historical clinical trial data to improve trial design and identify target patient populations[4].

## Personalized Medicine

Generative AI is enabling more tailored approaches to patient care:

- **Treatment Planning**: AI models can analyze a patient's medical history, genetic profile, and real-time health data to create personalized treatment plans[2].

- **Pharmacogenomic Insights**: AI can predict how individuals will respond to medications based on genetic factors, allowing for more precise prescriptions[4].

- **Virtual Health Assistants**: AI-powered chatbots can provide personalized medical advice and recommendations to patients[5].

## Administrative Tasks and Clinical Documentation

Generative AI is helping to reduce administrative burdens:

- **Clinical Note Generation**: AI can draft clinical notes and compile information for insurance preauthorization, reducing paperwork for medical staff[8].

- **Medical Transcription**: AI can transcribe and summarize patient consultations, filling electronic health record fields automatically[3].

- **Appointment Scheduling**: AI can optimize scheduling by analyzing patient needs and doctor availability[3].

## Medical Research and Education

Generative AI is also supporting research and training:

- **Synthetic Data Generation**: AI can create synthetic medical data for research purposes, especially useful for rare diseases where real data is limited[7].

- **Medical Simulations**: AI can generate simulations for medical training, such as scenarios for treating sepsis[5].

- **Literature Analysis**: AI can analyze vast amounts of scientific literature to identify research patterns and generate new hypotheses[5].

While these applications show great promise, it's important to note that most are still in development or early stages of implementation. Challenges such as ensuring data privacy, preventing AI hallucinations, and validating results in real-world settings need to be addressed as the technology continues to evolve[8].

Citations:
[1] Generative AI in Medical Practice: In-Depth Exploration of Privacy ... https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10960211/
[2] Generative AI Healthcare: 15 Use Cases, Challenges & Outlook https://research.aimultiple.com/generative-ai-healthcare/
[3] Generative AI in healthcare: Use cases and challenges - N-iX https://www.n-ix.com/generative-ai-in-healthcare/
[4] Generative AI in Healthcare System and its Uses | Complete Guide https://www.xenonstack.com/blog/generative-ai-healthcare-system
[5] 9 Innovative Use Cases of Generative AI in Healthcare https://saxon.ai/blogs/9-innovative-use-cases-of-generative-ai-in-healthcare/
[6] Generative AI Use Cases in Healthcare - Netguru https://www.netguru.com/blog/generative-ai-use-cases-healthcare
[7] Generative AI in Healthcare: Top Use Cases - ITRex Group https://itrexgroup.com/blog/top-generative-ai-in-healthcare-use-cases/
[8] Science & Tech Spotlight: Generative AI in Health Care | U.S. GAO https://www.gao.gov/products/gao-24-107634
[9] Tackling healthcare's biggest burdens with generative AI https://www.mckinsey.com/industries...ealthcares-biggest-burdens-with-generative-ai
 

Lark

Well-known member
Joined
Jun 21, 2009
Messages
29,682
I think it'll be interesting to see what AI achieves. To be honest, I would not be surprised if it turned out to be more limited than hoped for (or dreaded).

There are A LOT of "animal spirits", or ideological biases, in all the fields you mention, and I'm unsure that AI will correct them; trying to understand how those same decisions or biases came to exist in the first place could "break" the AI.

It would be very interesting to see how finance would work without much of the gambling and pseudo-fraud that exists at present. I think this is what Gates wrote about, in a really early book about computer technology, as "capitalism without friction".

In health there still exist incredible biases. The principle of less eligibility persists in one shape or another, whether it's rationing by pricing most of the public out of medicine, treatment, or care, or some combined assessment by medical professionals ruling people in or out in the same way, usually via waiting times.

In education, well, education for most is schooling, which is trapped in a model of life mirroring factory life and is largely about freeing up the adult population to be available as part of the workforce.
 

mgbradsh

Active member
Joined
Nov 6, 2008
Messages
354
MBTI Type
INFP
Enneagram
6w5
Ethically, I think the issue in medicine would be who owns the results? Does the patient own them? The hospital? The doctor? The company that owns the AI?

There are a lot of fantastic ways AI could be used to improve medicine, from accuracy in diagnosis, to supporting practitioners in recording patient information, but I think the worry would be access.

Would it be too expensive for the average person? Would an insurance company reject it? Would companies sit on results, much like drug companies do with patents?

I think the best case scenario is that AI improves patient outcomes across the board, but it seems like there are people heavily invested in those outcomes not coming to fruition.
 

mgbradsh

Active member
Joined
Nov 6, 2008
Messages
354
MBTI Type
INFP
Enneagram
6w5
I have a friend who is terrified of AI. He may have watched Terminator too many times and is sure AI will turn on humanity and wipe out society.

I believe his fear is a common one.

I heard something interesting from an Indigenous activist in Canada. They use AI, rather effectively, as a database to capture and restore their language. They said they thought the settler fear of AI was a bit funny. They understood it, but it was funny. The fear is basically that AI will wipe out their culture and their identity, but that's what happened to Indigenous people across the Americas when the Europeans came; having lived through that and come out the other side of it, it's not a fear they have any more.
 

ceecee

Coolatta® Enjoyer
Joined
Apr 22, 2008
Messages
16,334
MBTI Type
INTJ
Enneagram
8w9
I have a friend who is terrified of AI. He may have watched Terminator too many times and is sure AI will turn on humanity and wipe out society.

I believe his fear is a common one.

I heard something interesting from an Indigenous activist in Canada. They use AI, rather effectively, as a database to capture and restore their language. They said they thought the settler fear of AI was a bit funny. They understood it, but it was funny. The fear is basically that AI will wipe out their culture and their identity, but that's what happened to Indigenous people across the Americas when the Europeans came; having lived through that and come out the other side of it, it's not a fear they have any more.
Nearly all of this fear comes from the fact that the industry is run by comic-book-villain wannabes and people who shouldn't have more than a minimum-wage job, and a federal government full of kleptocrats who couldn't give less of a shit and don't plan oversight beyond a glance. The fear is common, but not without reason.



 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,731
Nearly all of this fear comes from the fact that the industry is run by comic-book-villain wannabes and people who shouldn't have more than a minimum-wage job, and a federal government full of kleptocrats who couldn't give less of a shit and don't plan oversight beyond a glance. The fear is common, but not without reason.



It's more complicated than the stories let on. "AGI" has a strong eugenics root, both in its goal-setting and in its explication of existential risks.


The AI conversation, when it comes to wages and worker support, has to be focused on who owns it, who controls it, who gets its benefits, and who gets credit for its productivity.

I don't trust any small group to control it. They all become "comic book villains," even if they weren't a bit like that to start.

The root of this fear is the subconscious eugenics drive around "AGI" as a goal.

I did a bit of analysis around "little tech" in that thread too.

If workers get new tools that they own and control, especially if everyone has them, businesses don't capture the profit; they hire whoever is best at using the new tools.

There are two ingredients to make that happen:
1) Everyone has to be able to afford to use these tools.
2) New, yet-to-be-understood use cases have to be created by the workers themselves, possibly outcompeting their old bosses in new firms.

Graphic designers command better salaries than paper and pencil artists did.

Software engineers have been working themselves out of jobs since the field's inception (and "AI" is often the buzzword for the new way of doing it). We no longer use punch cards or require PhDs in mathematics for that job, but each new set of tools makes them more productive.

There have been a couple of rounds where the new things have been locked up by big entities; the cloud and mobile are the rounds I am talking about. There, the corporate overlords locked up the automation value.

We get fatalistic, and think this is the only way it can happen.

We need to ensure that everyone can have it, and that people can compete in ways old bosses cannot dictate, so that new firms can make them irrelevant.
 

The Cat

The Cat in the Tinfoil Hat..
Staff member
Joined
Oct 15, 2016
Messages
27,394
"Guess the next word"(LLMs/Transformers) isn't killing the writing industry. It's the people who think that this is sufficient that are killing it.
Would you mind elaborating on what you mean by the bolded part? I wish to be certain I understand you correctly. Please and thank you.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,731
Would you mind providing elaboration as to your meaning by the bolded. I wish to be certain I understand you correctly. Please and thank you.
Anyone who thinks about hiring a writer, but then decides to use an LLM instead.
 

The Cat

The Cat in the Tinfoil Hat..
Staff member
Joined
Oct 15, 2016
Messages
27,394
Anyone who thinks about hiring a writer, but then decides to use an LLM instead.
We're just gonna have to agree to agree on this one. That is indeed the problem.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,731

As I've mentioned in other places, the core of this wave of AI comes from math taught in high school and early college (for most people in the US; many learn it much younger): large numbers of matrix multiply-add operations, some bounds checking, some exponentials, and a bit of multivariate calculus. Largely, if you use it, you can do so without deep knowledge of the math.
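To illustrate the claim, here is a toy sketch of exactly those ingredients in plain Python; the layer shape, weights, and function names are illustrative, not taken from any real model:

```python
import math
import random

def toy_layer(x, W, b):
    """One tiny 'layer': multiply-add, a bounds check, then exponentials."""
    # matrix multiply-add: h = W @ x + b
    h = [sum(wij * xj for wij, xj in zip(row, x)) + bi
         for row, bi in zip(W, b)]
    # bounds check: clamp negatives to zero (ReLU)
    h = [max(v, 0.0) for v in h]
    # exponentials, shifted by the max for numerical stability,
    # then normalized into "guess-the-next-token" probabilities (softmax)
    m = max(h)
    e = [math.exp(v - m) for v in h]
    s = sum(e)
    return [v / s for v in e]

random.seed(0)
x = [random.gauss(0, 1) for _ in range(4)]          # input vector
W = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]  # weights
b = [0.0, 0.0, 0.0]                                  # biases
probs = toy_layer(x, W, b)
print(probs, sum(probs))  # three probabilities summing to 1
```

Real models stack millions of these operations and tune the weights with the multivariate calculus mentioned above (gradient descent), but the per-step arithmetic really is this elementary.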

Its domain of application is as general as math itself.

That's why, for ethical and global-equity reasons, the core of it has to be in the public domain.

The quadratic equation can be used for ballistic calculations. But denying that people can learn it until they register their learning of it with some government entity seems draconian.
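As a concrete instance of that analogy (the numbers and function name here are hypothetical, just a sketch): the quadratic formula gives a projectile's time of flight from 0 = h0 + v0*t - (g/2)*t^2.

```python
import math

def time_of_flight(v0, h0, g=9.81):
    """Positive root of 0 = h0 + v0*t - 0.5*g*t^2 via the quadratic formula."""
    a, b, c = -0.5 * g, v0, h0
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("projectile never reaches the ground")
    # with a < 0 and c >= 0, this root is the positive, physical one
    return (-b - math.sqrt(disc)) / (2 * a)

# A ball thrown upward at 10 m/s from a 2 m ledge:
t = time_of_flight(10.0, 2.0)
print(round(t, 3))  # seconds until impact, about 2.222
```

The same high-school formula, and nobody has to register with a government entity before learning it.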

If you want to apply it for certain domains of application, then yes, regulate that it be deployed safely. Go after the deep fakers, the scammers, the ai snake oil sales people who claim results that aren't there.

But advocating that these better and better "guess-the-next-token" calculators should not go into the public domain is actually the thing that'll cause rampant inequality.

Then only the people who are cronies of the predominantly white, European-heritage folks will be allowed access.
 

mgbradsh

Active member
Joined
Nov 6, 2008
Messages
354
MBTI Type
INFP
Enneagram
6w5
Nearly all of this fear comes from the fact that the industry is run by comic-book-villain wannabes and people who shouldn't have more than a minimum-wage job, and a federal government full of kleptocrats who couldn't give less of a shit and don't plan oversight beyond a glance. The fear is common, but not without reason.



I like that, comic book villains. Although I feel like government is more of a plutocracy, or a kleptocracy through plutocracy.

You're right about government being incredibly far behind. I think government operates through fear. Not by spreading fear, but by acting in fear. They are afraid to act because they are afraid of the consequences of action.

I was thinking about this today, and it feels like this shift will be to humankind what the Industrial Revolution was. It's going to change the course of humanity in ways we can't possibly expect or understand (maybe AI can tell us?), and we're probably past the tipping point of being able to stop it.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,731
I like that, comic book villains. Although I feel like government is more of a plutocracy, or a kleptocracy through plutocracy.

You're right about government being incredibly far behind. I think government operates through fear. Not by spreading fear, but by acting in fear. They are afraid to act because they are afraid of the consequences of action.

I was thinking about this today, and it feels like this shift will be to humankind what the Industrial Revolution was. It's going to change the course of humanity in ways we can't possibly expect or understand (maybe AI can tell us?), and we're probably past the tipping point of being able to stop it.
We have been past the tipping point since the invention of the Turing Machine, the first time discussions like these were had about AI.

What we can do is determine direction, control, benefits, and things like that.
 