So today I was asked to talk about costs, so we're going to talk a little bit about costs, and a little bit about methodologies: what we mean when we talk about cost and how that impacts the conclusions we come to. There are three main points that I'd like to make today. One is that different perspectives require different costs.
Second, some cost data are not available except through primary data collection. And the third is that primary data collection can provide important insights. There are several common cost perspectives. There is the societal perspective, which means every single cost is taken into account. Then there's the government perspective: what public funds go in to pay for services. There's the patient or client perspective, and there's the caregiver perspective. When we talk about societal costs we are actually talking about government, patient, caregiver, everything, all the costs that go into providing services. Now, government costs are oftentimes divided into two primary categories: health costs and non-health costs. So even though, if you looked in a textbook and asked a health economist what perspective you should take, they would say societal, when you pick up a research paper or report you often don't see societal costs.
What you actually see is government costs, and of the government costs you're probably going to see health costs. It's kind of like when you go to the grocery store: you get charged for the units of things you buy, like gallons of milk, and then you figure out how much each gallon costs, and then you come up with the total cost. Now, that's okay if the things we're counting are the main things. So when we talk about health we often count hospital and emergency costs, and if we're lucky we'll get physician costs. But is that all the costs for mental health? Well, if you don't account for the other things, basically what you are assuming is that their cost is zero.
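The "grocery store" costing logic described here can be sketched in a few lines. The service categories and unit costs below are hypothetical, for illustration only; the point is just that a total cost is quantities of services multiplied by unit costs, summed.

```python
# Micro-costing sketch: total cost = sum over service types of
# (units used x unit cost). All figures below are hypothetical.

unit_costs = {              # cost per unit of service (hypothetical)
    "hospital_day": 1200.0,
    "emergency_visit": 450.0,
    "physician_visit": 80.0,
}

utilization = {             # units of each service one client used (hypothetical)
    "hospital_day": 3,
    "emergency_visit": 2,
    "physician_visit": 10,
}

# Multiply each quantity by its unit cost, then sum across services.
total_cost = sum(utilization[s] * unit_costs[s] for s in utilization)
print(total_cost)  # 3*1200 + 2*450 + 10*80 = 5300.0
```

Any service category left out of `unit_costs` is implicitly counted at a cost of zero, which is exactly the problem described next.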
And the thing that usually gets left out is community programs. Are the costs of community programs zero? No, so that's a problem, isn't it? At least in Canada we don't have administrative data for community mental health, because community mental health is provided at a local level, so there's no consistent way that the data have been collected. If you want those data, you're going to have to go collect them. And there are some costs that are hard to find, aren't there? Community services are one of them, but there are also non-health costs that are health-like: in a shelter, people receive services; if they're in jail or prison they also sometimes receive health services; and if you have a new intervention, a new program, oftentimes the costs aren't collected systematically for it.
So when comparing costs between a new and an old way of doing things, we are actually testing whether the total cost of the new thing is equal to that of the old thing, that is, whether the difference between the two is zero. So what if you take a societal perspective, but it's not really a societal perspective, because you only use accessible data: hospital, emergency, and maybe physician costs?
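The hypothesis being described, that the difference in mean total cost between the new and old program is zero, can be sketched with a simple bootstrap. The cost figures below are simulated placeholders, not study data; the sketch only shows the shape of the test.

```python
# Sketch of testing H0: mean(new costs) - mean(old costs) = 0,
# using a percentile bootstrap. All costs are simulated, not real data.
import random
import statistics

random.seed(0)

old_costs = [random.gauss(10_000, 2_000) for _ in range(200)]
new_costs = [random.gauss(9_500, 2_000) for _ in range(200)]

observed_diff = statistics.mean(new_costs) - statistics.mean(old_costs)

# Resample each group with replacement and recompute the mean difference.
diffs = []
for _ in range(2_000):
    resampled_new = random.choices(new_costs, k=len(new_costs))
    resampled_old = random.choices(old_costs, k=len(old_costs))
    diffs.append(statistics.mean(resampled_new) - statistics.mean(resampled_old))

diffs.sort()
lo, hi = diffs[49], diffs[1949]  # roughly a 95% percentile interval
print(f"observed difference: {observed_diff:.0f}, 95% CI: ({lo:.0f}, {hi:.0f})")
# If 0 lies outside the interval, we reject "the difference is 0".
```

The key point from the talk is that this test is only as good as the cost categories that went into `old_costs` and `new_costs` in the first place.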
What happens? You're assuming that your new program doesn't have any effect on the other non-health costs, no effect on your clients, and no effect on the caregivers. But that may be a problem for mental health. Is mental health care only provided by the health care system? Is it only given in the hospital? No. It's a problem because mental health care isn't exclusively physician- or hospital-based; it's provided in the community. It's also given in prisons; it includes contact with the justice system; it includes shelter costs. There's a lot of stuff, and if we don't count those other things we might come to a different conclusion. So what can primary data collection reveal? I'm going to take a case study, and this is from Canada. Now before you tune out: the province of Ontario, after having frozen the community mental health budget for 12 years, decided to invest new money, and the ministry wanted to know, if we invest new money, does it have any effect? So we thought, okay, let's look at continuity of care, and we looked at two different specialized programs: court support programs and early intervention programs. For those of you who don't know what Ontario looks like, we're above the Great Lakes, and we did a multi-site study in six communities in Ontario.
Now, it looks like we've neglected the North, but actually not very many people live up there. This is Thunder Bay, and they have to take care of that entire area, which is about the size of France. Most of our population is in Toronto, with a bit in Hamilton.
But if you asked what it looked like, it was basically a rural program. Now, all six of the early intervention programs that were included follow the guidelines of the International Early Psychosis Association, and our ministry also has a set of program guidelines and standards for early intervention. That took five years for them to develop, but it's helpful for the programs to know what they're aiming at, and we do have EPION, which is EPI Ontario. We've managed to create a network of the 56 early intervention programs serving Ontario, and, I guess this is a commercial, it's been very helpful, because the programs meet monthly and they're able to leverage funds to do education they couldn't have done on their own as smaller programs, to bring speakers together, to set standards together, and to talk to the ministry. So for this case we're going to look at two groups: those enrolled in early intervention for more than 12 months and those enrolled for less than 12 months. The question is, is there a difference in the use of services and supports based on length of involvement in early intervention? Now, why would we do that? Part of the reason is that for a decision-maker, the budget is twelve months.
So oftentimes the interest is in what happened during those 12 months. This also might be informative because, over a longer time horizon, do the people who've been in the program for more than 12 months use the same amount of services? If not, it's a good rationale to invest in the program if we can see that they're using fewer services over the long run. Okay, so we collected information on hospital, medication, physician, community support, housing, and legal costs, and what did we find? Here's our main slide. If you only look at hospital, emergency, and physician costs, it looks like the people who are there longer than 12 months cost more on average.
But if you include community mental health, that difference decreases, and if you include the non-health costs, like legal contacts, social services, and housing, that difference halves from what it was to begin with. And then when you include patient and insurance costs, it switches entirely.
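The pattern on this slide, the cost difference shrinking and then reversing sign as the perspective widens, can be illustrated with a toy calculation. All of the numbers below are invented for illustration and are not the study's figures; only the mechanism matters.

```python
# Hypothetical per-person annual mean costs for two enrollment groups,
# showing how a cost difference can shrink and reverse as more cost
# categories (wider perspectives) are included. All figures invented.

categories = [
    "hospital_emergency_physician",
    "community_mental_health",
    "non_health",
    "patient_and_insurance",
]

longer_than_12m = {"hospital_emergency_physician": 6000,
                   "community_mental_health": 2500,
                   "non_health": 1000,
                   "patient_and_insurance": 500}

under_12m =       {"hospital_emergency_physician": 5000,
                   "community_mental_health": 3000,
                   "non_health": 1600,
                   "patient_and_insurance": 1500}

# Widen the perspective one category at a time and track the running
# difference (longer group minus shorter group).
running_diff = 0
for cat in categories:
    running_diff += longer_than_12m[cat] - under_12m[cat]
    print(f"after adding {cat}: difference = {running_diff:+d}")
# Health-only view: +1000 (longer group looks costlier).
# All perspectives:  -1100 (the conclusion reverses).
```

The narrow, health-only comparison and the full societal comparison point in opposite directions, which is the talk's central point.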
So what you might have started off saying, that people who have been in there longer cost more, reverses when you take into account all of the costs: they actually cost less. Do caregiver costs matter? Yes. We found, from a survey of the caregivers in this study, that they spend an average of $2,000 a year; even in a public system they still contributed to the care of their family members. So, what do we conclude? If you don't actually test it, you won't know if there is a difference. There are useful services and supports that aren't typically collected in an administrative data system: medication, insurance, community mental health services, and caregiver contributions. And if we don't advocate for collection of those data, there won't be resources for it, so we have to advocate for that; if we don't collect the data that allow for cost estimation, we won't know what we're missing. And including a wide range of perspectives acknowledges that mental health has effects not only in hospitals and emergency departments; it also recognizes the contribution of other sectors, families, and clients.
I was asked to talk to you about how Maryland developed its outcome measurement system, a statewide system for reporting mental health and behavioral health measures. So that's the main text. There are several subtexts, however, including some comments about the importance of state mental health authorities, which relates to some of the concerns that Darrell Steinberg expressed in his remarks to us at midday.
It also speaks to the importance of community participation in the development of policies and policy instruments, which we'll hear more about later in the day from Ken Wells and his colleagues. At the University of Maryland, through a very close association of the state university with the single state agency, I oversee an enterprise called the Behavioral Health Systems Improvement Collaborative. It's kind of a mouthful. It was originally the Mental Health Systems Improvement Collaborative, but in the last two years we've gone through an integration that brought the services for substance-related disorders and alcoholism into a single state agency, and so, in reflection of that, as an instrument of the state agency at the state university, we changed our name to the Behavioral Health Systems Improvement Collaborative.
There are three entities that are part of the collaborative. One is a training center that provides these kinds of trainings statewide, sometimes for special interest groups, sometimes on suicide prevention, sometimes at an all-state meeting. It includes an Evidence-Based Practices Center, which provides initial training and ongoing monitoring and improvement activities for evidence-based practices throughout the state. We have a set of fidelity standards for a range of evidence-based practices; we train to them, and we hold the programs accountable to those statewide standards.
In return, they receive a higher rate of compensation for adherence to those fidelity standards. And then the third entity is called the Systems Evaluation Center, which provides policy analysis and developmental work, and it was the creator of our statewide outcomes measurement system. Here I'd like to credit the director of that center, Diana Seybolt, who's been the overseer and the midwife for this particular project, which has taken many, many years. What motivated the development of this statewide outcomes measurement system were the concerns raised by the budget analyst responsible for presenting mental health and behavioral health activities to the state legislature, and it became clear that there was a drive in Maryland a decade ago to do more of what is now referred to as performance-based management. And so the concern of the legislature, representing the populace of Maryland, was: what are we getting for the investment that we make in behavioral health services? What evolved as the mechanism for holding the programs in the state accountable was the development of this outcomes measurement system in all of the outpatient mental health clinics in the state.
Now, lest you think that what happened is that the great university in downtown Baltimore almost immediately imposed a set of outcome measures from our panoply of research-dependent measures, you would have to think again. In fact, it took years for us, working with all of the stakeholder groups in the state, the budget analysts in the legislature, and our single state agency, to develop a list of very simple and straightforward measures that concern the general public. So our outcome measures include things like: What is your residential status? Are you housed? Are you unhoused? How stably housed are you? What's your participation in the workforce, or for children, your participation in education? What contact, if any, have you had with the criminal justice system? These are big-picture, socially motivated outcomes. But yes, we are also concerned about symptomatology and distress, and we turned to the BASIS-24, after reviewing for quite a long time with all our stakeholders what the domains of interest and importance were, and then deciding on an instrument that had been used in public settings and had the properties of reliability and validity we like to see; that was why we selected this particular measure. For each person who enters an outpatient mental health clinic in Maryland, through an interactive process in which the clinician asks these questions that are part of the outcomes measurement system, the clinician enters the responses into a system of authorization for services from our Medicaid system.
So participation and data are at a hundred percent: you don't get paid if you don't do the outcomes measurement system. After initial authorization, every six months there's an update, and I want to stress that this is a report from the service user through the interaction with the clinician, so it promotes that kind of relationship to develop the data. Then it's entered into the system.
When an individual separates from services, there is an outgoing or discharge review of all of the same questions. From the compilation of all of these data we have what we call a data mart, which makes the data available not only to the state legislature, which was the intended recipient, but to all of the organizations. We have what we call core service agencies, our local mental health authorities; they have access to the profiles for all the providers in their county area. The state has access to all the state data, and individual programs can look at how they compare with the other programs within their core service agency. So what we have here is a really constructive balance of two types.
One is between the scientific expertise that the university offers to the state through this ongoing relationship, which has lasted for the better part of thirty years, and engagement with the community, so that when we do studies or propose the design of instruments for data collection, they reflect what the community of interest thinks are appropriate measures. The other balance is between the local mental health authorities, which are responsible for oversight of mental health programs at the county level, and a very strong central state authority, which collects the data, imposes standards for data collection, and, in some of the other activities that I described, imposes fidelity standards for evidence-based practices, in exchange for which it provides a higher rate of payment for higher-performance care.
So we find it's an extremely important relationship to have, granted in a much smaller state than California: a very effective state mental health authority that still recognizes local creativity and innovation at the county level. Now, off the topic, just to conclude, I had one comment about the learning community. In Bob Heinssen's remarkable story this morning, he pointed to the activities that are going on in Maryland, and one of the things funded by our state legislature is a Center of Excellence in Early Intervention, headed by Bob Buchanan at the Maryland Psychiatric Research Center, but with a very significant service component, as Bob shared.
They were bringing out new programs, and we have just instituted a learning community bringing together three sites in Maryland that provide early intervention services, so we're ready for the national learning community network and look forward to that. I look forward to any questions you might have, and it's a pleasure to be back here in California. All right, so I'm Renay Bradley, from the Mental Health Services Oversight and Accountability Commission, where I'm the Director of Research and Evaluation. The Mental Health Services Oversight and Accountability Commission, the Commission for short, was established when the Mental Health Services Act was passed in 2004 and then implemented in 2005. We have a statutory role to provide oversight of the Mental Health Services Act, Prop 63, and to hold relevant entities accountable for their roles in it. I was hired to provide leadership of the statewide evaluation efforts that the Commission conducts as part of its oversight role, and we have committed to using evaluation as a strategy to provide oversight.
So when you look back at the original intention of the Act, and obviously we've heard today from one of the writers of the Act, and in the audience is another author who could probably speak much more to this, one of its overarching goals is to use those funds, and a lot of money is coming, to transform California's public community-based mental health system. So what does that mean, to transform the system? When you look at the Act there are explicitly defined goals: the Act wants us to provide timely access to services, especially for those who are un-served, underserved, or inappropriately served; to reduce disparities in access to care; and to prevent mental illness from becoming severe and disabling, especially within the context of prevention and early intervention, which is one of the components of the Act. So how do we know whether we are achieving all these goals? I'm an evaluator, so that's where evaluation comes in.
So there is obviously a role for evaluation, and it's been very interesting for me today to hear from my fellow panel participants about what's going on in the state of Maryland and in Canada: their efforts to think about what we need to do, why we want to evaluate, and how we're going to address mental illness and transform our mental health systems so they're actually making a difference, via the use of evaluation. This is a big challenge, and I'm excited that we are having this particular symposium today, because I'm happy to know that others are engaged with this, and we want to have a joint, collective, collaborative effort to address some of these issues. Because, as Darrell said earlier, there are still lots of unaddressed questions. People ask me all the time, has the Act actually made a difference? And as some of the audience members said earlier, I think yes; anecdotally, there definitely are differences that have been made.
I think when you look at programs like Sacramento County's SacEDAPT program, for sure you see lots of positive outcomes. But when you try to roll things up at the statewide level, the picture is a little bit foggier. So the one thing I wanted to share is that, in trying to figure out where we need to go in terms of identifying useful, meaningful outcomes and measurements of those outcomes, we really need to think about what questions we need to answer. And that varies based on whom you ask.
All right, so if you're asking a clinician or a provider, they might want to know what is going on with their individual clients: are my clients actually getting better based on the practices that I'm using? You might then ask the counties: what do you need to know to ensure that your county-level infrastructure and system is doing the right things, is leading, again, to transformation of this overarching system?
And then you think about the state, and California is perhaps a weird state. We're very large, and we're very diverse. We have these fifty-eight individual counties, some of which are probably little mini-states in and of themselves. But still, there is a role for the state, because when we think about things like the legislature at the policy level, as Darrell said earlier, there is this risk, this fear, that perhaps somebody will come in and take away the MHSA.
I don't think anybody wants to see that happen, but what really is needed to ensure that it doesn't? Do we need to tell a story? Do we need to tell a positive story? Do we need information that shows we've been truly, rigorously assessing whether or not the system works, including successes and failures, so that we can identify those failures and figure out what to do instead? I throw those ideas out; I certainly don't have all the answers. Although I do want to mention that the outcomes you were talking about with regard to Maryland are very much outcomes that the Act has identified.
They are those functional-level outcomes, and we definitely use them as statewide indicators that we hope will help us monitor the performance of our mental health system: things like residential status, employment or education, even things like suicidal ideation. So that's a common theme that I just wanted to point out. I will leave it at that, say I'm happy to answer questions along with everybody else, and thank you again for inviting me to participate. Okay, so we have a couple of questions, and I think there are a couple more on their way, and I'll certainly be interested in hearing a little bit from the counties about the challenges they may have in addressing this issue of how you evaluate outcomes and how you use those data to inform you when you're introducing new programs. So, the first is for Howard: would the outcome measures developed for Maryland apply to California? I suppose I have two responses. One is, I'm not sure why not, but I would want to test it through a process similar to the one I described, engaging stakeholders to test the proposition that the measures are relevant, to see whether there might be additional measures that would be appropriate and some that might be deleted. But it certainly could be an armature for comparison that might speed the process here compared to what we went through in Maryland. This next question sort of speaks to the point you just raised.
So, what kinds of questions would you wish to add now, looking back? And actually, since you spend a lot of time in California, I think you are very familiar with our environment here. Are there things you would elaborate on, beyond the standard measures that are hardwired into your system right now, that you think would be useful for us here? Honestly, no. Not really. I'm not feeling very creative at the moment.
Well, here's one that might be relevant for you, Renay, or maybe for Toby, actually: can we use MHSA dollars to support a collaborative like Maryland's that supports training as well as outcomes evaluation and oversight? Yes; Commissioner Van Horne says yes. You know, there are different ways that counties can use the funds; there obviously is guidance with regard to the Act and how it says the funds should be allocated and used within the counties. But when you ask that, I think about the role of the Commission as this one statewide entity that has such an involvement with the Mental Health Services Act, and one of the strategies we've committed to using in order to provide oversight is providing support to the counties in implementing programming, evaluating programming, and so on. I've heard from many counties saying, hey, I'd love to know what's going on in other counties, especially counties that are doing similar things. So when you've all talked about implementing the EPICCAL or whatnot, that idea resonates very much with the feedback I've been given over the years about what support the counties would like to have. Take something like SacEDAPT again: we have this one program that was developed in the context of academia, run here, funded by Sacramento County, run by clinicians and academic researchers. How do you take that and roll it out in a county that maybe doesn't have an academic institution directly affiliated with it? So I think there's a role for the Commission to provide that level of support and collaboration.
Since I let you down on the last question, I have answers for things you haven't asked. I should say that the entity I described, the collaborative, is hardly removing resources that would otherwise go into direct services. The block grants, the alcohol, drug abuse, and mental health block grants, are used to fund this activity. Some portion of the supplement for early intervention services helps to staff trainers centrally in our offices to provide that training.
We have a very strong commitment in Maryland, many decades of investment in our public behavioral health system, to providing an infrastructure that's essentially an extension of the staff of the state agency, and that collaborative employs between 15 and 20 people on a yearly basis whose main goal is to serve the interests of the behavioral health system in the state. The state provides it using the funding from the federal government, some general funding, and some creative funding, and it's something that any state could do. Back when I was an adviser to the New Freedom Commission on Mental Health during the Bush administration, they put forward a recommendation on evidence-based practices suggesting that every state use its block grant to create an infrastructure for these kinds of activities. But in our very permissive federalist system, these are only encouraged uses of the block grant, and here in California that permissiveness flows downhill. I think some degree of centralization, if not of authority then of energy, of convening power, and of these kinds of leadership activities, could go a long way toward improving the services that are delivered statewide. So there's an answer to a question you didn't ask. I think we call that a commentary or an opinion. Yes, in the editing business that's what we would call that.
So, a couple more questions for Renay. Renay, I don't know why there is so much interest in the MHSA OAC, but the next question for you is: will or can the state mandate that PEI MHSA money be used to develop clinical high-risk and first-break programming in each county? And I think the qualifier here is that the term prevention and early intervention is often too vague. That's an interesting question: can the state mandate specific programs within the PEI context? To my understanding, that would involve a change in the statute, or, within the context of the regulatory process, the Commission would need to interpret the Act in a very specific way. So as things currently stand, no, there isn't a way for the state to mandate that, but I do think there is a role and a way for the state to highlight highly effective, and preferably cost-effective, programs that have led to achievement of the goals in the Act. We actually did a statewide evaluation of twelve early intervention programs here across California.
It was a challenge because of the limited consistency across the measures, basically the data that we had available. We did see positive outcomes, definitely: things like reduced suicidal ideation, reduced symptomatology, good stuff, reduced involvement in the justice system. But again, we're only talking about a handful of counties. If we can consistently promote and highlight the fact that, yes, when counties implement these programs, preferably with fidelity, we achieve these outcomes, it gives counties, I want to use the term incentive, though not incentive in the way you were talking about, where programs are paid more, and that's an interesting conversation perhaps to have. When you have high-achieving counties that are actually doing things in a way that leads to achievement of the goals in the Act, nothing really happens at this point to highlight or reward them beyond simply saying, yay, you are doing a good job. And then the other counties obviously want to come along and say, of course, we want to be able to learn from them, which speaks again to the collaborative idea, because we want to have a cross-county learning culture.
But as far as a mandate, I think that would require a change in the law. Next question for Renay: could the MHSA OAC develop a data collection system for PEI programs and develop common measures to show a comparison of outcomes? Who asked that question? All right! I'm sorry, did you want to repeat the question? So let me make sure I've got it: has, or should, the Mental Health Services Oversight and Accountability Commission created a statewide data collection and reporting system for PEI? The answer is no, we do not currently have a statewide data collection and reporting system for prevention and early intervention. The Act does highlight the need to evaluate all of the programs funded via MHSA funds, and specifically PEI as well. The Commission recently went through the regulatory process, actually. Regs, for those of you who don't know, regulations, are basically meant to clarify the law.
When you read the Mental Health Services Act there can be ambiguity: one person could interpret it one way and another person another way. So we go through the process of writing regulations, in this case for PEI and Innovation, another of the MHSA components; the Commission was given that authority, and within those regulations we have provided requirements for data collection with regard to prevention and early intervention programs. Those regulations, I don't believe, have been fully approved yet, but we're eagerly awaiting them and assume at this point that they likely will be. When that happens, boy, we need to be able to provide the counties with support, because it's going to be a challenge to get all of the counties to a position where they can provide the data we're requiring. This gets at the heart of the matter. People ask, hey, why don't you know more about PEI at the statewide level? And I'll say, oh, I just don't have the data. But then we say to counties, I'd love the data, and I understand all of the challenges the counties go through in terms of support and resources. So there really is a need to structurally consider what we could be doing across the state to provide the counties with the support they need so that they can do evaluation, again, at those different levels of evaluation.
Do it so that they can improve the quality of their own services at the provider level and at the county level, but also give the state what it needs, whether that's understanding the statewide story or being able to tell the story to the legislature. And I understand that with Tara there is this pilot project that is trying to address issues related to what the measures are and how you would incorporate measurement into the workflow in a real-world clinic setting. Yes, yes, that's very interesting, and that's why I'm very interested to learn more about Carolyn's work. So we're working with Tara and others here at UC Davis, really using the SacEDAPT program as a pilot; the intention is to see if there's something we can figure out that would later work at a statewide level. Again, we're really talking about those 17 counties that were on your slides before, thank you very much, Dr. Heinssen, that showed the different counties that are actually implementing the early psychosis programs.
But we're trying to figure out, getting back to that cost-effectiveness issue. The Act wants the counties to implement programs that are effective, but there's also an intention to provide cost-effective programs. So how do we take something like SacEDAPT and figure that out? It's a very interesting question. I mean, I'm not a fiscal person or an economist, but it's been very interesting to think about: you put all these costs in, and what would be an appropriate way to measure whether or not that's actually saving the system money in the long run?
Especially since we're talking about very long-term outcomes. So we're excited to have you come on board, hopefully soon, to provide more insight on that. But yes, that is a project we're working on currently. So the next question is for Dr. Goldman; it's a more technical one, and I think it echoes a point that Rachel Loewy made earlier about hospitalizations. And that is, how do you validate your measures, particularly ones like self-report of contact with the justice system or hospitalizations, those sorts of things? It's important for us to validate as many of the measures as we can.
We've gone through a rather extensive exercise, partly because the natural connection of this enterprise to our research enterprise in services research means that we would like to be able to use these administratively collected data to do research studies. Currently we write reports on them for the legislature and to report up to the federal government, but we wanted to go through a more formal process of validating the measures so that we would feel more confident if we tried to use them for a more formal research study. So we undertook a careful look at what other data sources we would have, to see whether the accounts reported in this interaction between the clinician and the service user could be corroborated. We have a full report on that for adults; the children's system is a little younger, and we're about six months away from validating it. But we do have multiple data sets; part of the Systems Evaluation Center is that we keep all of the administrative data from Medicaid. It's available to us.
We work with our administrative services organization, which is the structure we have in a carve-out of Medicaid services in Maryland, and we work with the staff, the data people, at the carve-out company, Value Options. So we have a variety of data sources that we use to try to corroborate or validate these measures. Some of them turn out to be better than others. For the hospitalization measure we have all the data in the Medicaid system, but we don't know about Medicare utilization. So we have to assume that most of our Medicaid-eligible service users are doing their hospitalization using that resource.
So it probably is an undercount. And an important one in terms of cost of care for this population. Yes, that's true. It's very interesting, I should add, that while most of my research is of course economic research, this particular enterprise is not motivated so much by an assessment of the costs. I think it's understood that the setting of budgets in the state is very much a political process.
What the legislators are more interested in is: what am I buying with this allocation of resources? And so here the outcomes measurement system is predicated on a concern about performance and public health and socially important outcomes within a constrained budget, rather than looking to count the costs. And all of that data that you're generating related to the measures, their validity, their reliability: is there a way for that data to impact communities like this and efforts like Renay's and obviously Bob's, other than buying a subscription to the journal Psychiatric Services? Let me think about that from the perspective of my publisher. I'm certain that you have ready access to the journal through Psychiatry Online in your library. No, in fact the staff that I work with in our systems improvement collaborative are proud that they do not contribute much to the scientific literature; these are reports that are useful to the public in Maryland, and when there is something of scientific interest they occasionally find a way to publish it so that the readers of Psychiatric Services or other journals can learn about it, which we did to describe the collaborative and some of these activities. But if you go to the state of Maryland and the Mental Hygiene Administration, the Behavioral Health Administration I think we call it now, you will find what are called data shorts, which are reports from this system, and you can actually use this outcomes measurement system and see some of the de-identified data about how programs across the state are doing on these measures. So we do have this data mart that, I would say, service users and program developers are learning to use.
There's not the thirst for knowledge and information that you might expect, but it is being used. But this seems like one of those interesting quirks of the way that systems are organized, where all that work is being done, and unless you were a reader of the scientific literature, which most providers are not, you wouldn't have the benefit of all that's being learned, because it's staying in Maryland. And so is this something that PEPPNET is going to address? Are you already plugged into these data, Bob, as part of their learning healthcare network? Or is this just another one of those lights that's going on as we all get together and talk to one another and realize how much common interest we have? Well, Bob and I have talked a little bit about that, so the fact that we live close together and work close together and have been friends for a long time I hope will make that eventuate. People do avail themselves of some of these data, and there is a really good opportunity to use them.
But it's also important that the data be standardized, and we were talking about that earlier. This is where I think having a certain degree of centralization, whether it's federal centralization or state centralization, can be quite important. I'm speaking now as an editor also. Some of the papers that have had the hardest time getting through the review process when they're submitted to Psychiatric Services have been reports on the effectiveness of the full-service partnerships in California. Partly it's because of the lack of comparability of the programs, and that's only one reason to have high standards and comparability for data collection and dissemination, so that when you speak about an intervention people know what it is; it has some standardized meaning. From the point of view of quality assurance, we assume that there's a tremendous voltage drop between the original design of an intervention and its implementation. That is, it degrades; we call it drift, we call it voltage drop, we have lots of terminologies for it, but what it is is a decline in performance. And if you don't aim high at the beginning and have a high standard, you surely are going to end up with a very low standard of care unless you are incredibly attentive to these quality standards.
I think it's very important both for research, for information, for data sharing, and for the further implementation of things like early intervention programs. So I'd be very interested if there are any additional questions from the audience, and Dr. Goldman has already created a precedent for comments as well as questions.
Richard. Run, Tara, run. Now we're on, alright. My recollection of the history of Maryland is that there was consultation done through John David Goodrich with the Baltimore Mental Health Corporation twenty-some years ago, which introduced a lot of the outcomes you're currently using, which by the way have been in statute in California since 1991. That's how we're supposed to judge our programs across the board, with the same quality-of-life outcomes. The other thing I want to remind everyone of is that the evaluative role of the Mental Health Services Oversight and Accountability Commission is not limited to money from the MHSA; it covers the whole spectrum of mental health funding. Dr. Heinssen? And there is someone behind.
I just wanted to make a comment about the voltage drop phenomenon that Howard mentioned. We know it's real; it is very commonly reported in the literature. But I think we can think about this in a couple of different ways.
The way that I choose to think about it is that we get good outcomes in clinical trials because the trials have a real emphasis on measurement-based treatment, measurement of the interventions, and attention to quality and feedback. When those programs get put into the community, rarely is that same kind of attention paid to all of those things. So that would account to some extent for the voltage drop. But, and you didn't say this, I think it's a mistake for us to think that the results in the clinical trials are the upper limit of what can be achieved. Because very often the clinical trials are starting a new program, and even though they're getting better results, there's a learning curve in the trials. People are learning how to do it; they're getting better over time. So just about the time when the trial is ending, people are hitting their stride, and it may be the case that if you continued to pay attention to the fidelity of the implementation and to the outcomes, you actually would see that the effectiveness of the program would continue to go up. And this is what I would like to see.
This is part of this continuous learning health care system: it doesn't accept that the voltage drop is inevitable or acceptable or something we simply have to mitigate. It says we have a starting point, we set a target from that starting point, and we use the data to help shape a system that is more effective in reaching its goals. That's what I think is the exciting opportunity here.
This learning health care concept, supported for really the first time by the ability to conceive and pull off a big data framework, is going to open up a whole new set of possibilities and give us a chance to establish a virtuous cycle of improvement rather than the vicious cycle of voltage drop. Clark? Oh, sorry, Sergio. Clark is waiting too.
Howard, this is a question for you. I was listening to the system outcome measures that you developed in Maryland, and one of the pieces that caught my attention is the care that you took in doing community engagement and having measures come from the communities that you work with, asking the stakeholders what matters to them. I applaud you for that; I think it is incredibly important, because if we are to change community health, that's one of the necessary ingredients to make it happen. My question is whether you can provide a little bit of detail on how that process went and some of the experiences that you had that made it happen. I'll get the microphone, but somebody once said they should invent a microphone to reduce the sound of my voice. But I will use this and tone it down a little bit.
Thank you, Sergio; perhaps we learned from your example in your work in communities. I delegated most of this because it requires patience, and as you know that's not one of my strong suits. At least not on a day-to-day basis. But after all these years I've learned to be patient for incremental change to eventually get us to some very progressive endpoints, take parity insurance for example. So really it took a long time, lots of meetings, lots of back and forth, balancing high expectations with the realities of working in the service environment, and over time what emerged was a set of common dimensions of outcome that people could agree to. Then our job as the technical experts was to find established measures that could be used, and in almost all instances there was a measure that had been used and had known psychometric properties. We still tried to select the ones that were most practical and had good properties.
We would present these back to the stakeholders for their consideration, and they might want to change something, and then we would have to explain that if you change it, we're not sure you can maintain the properties of reliability and validity. And so it went over the years, and then it was piloted. So it is a long developmental process, but it's a very valuable one. It's one that has people, for the most part, believing in the system. But we have to constantly remind people about what the outcomes measurement system is compared to the many other things they have to fill out, and there are widespread misconceptions about it. It is an ongoing struggle that Di Seybolt and her staff at the Systems Evaluation Center deal with on a day-to-day, week-to-week basis, but it's a living organism. And it had to be created in the way we describe if it was going to be accepted and people were going to engage with it.
And the legislature seems very happy with it, and we use it to report to SAMHSA on block grant utilization, so all is reasonably well in the state of Maryland with respect to it. Clark, I think we have time for one more question or comment. Have you developed in Maryland a prevention measuring system like your treatment measuring system? California actually visited Maryland and the University of Maryland in 1996 or so with some statewide measurements that it was proposing. It never really got off the ground, but I'm glad to hear that you guys got it going. But have you done anything on the prevention side? There are no population-based measures, if I understand you correctly, that would be used to assess the impact of an intervention on the population, or the incidence of specific disorders or behavioral occurrences of that ilk.
No, that hasn't been a part of our work. On the other hand, there is work being developed in the early intervention program on a series of measures, and there is a widespread set of measures because it's also being used for research. I mean, the Center of Excellence was created at the Maryland Psychiatric Research Center with the connection to our group in the department, rather than the other way around, because the intention was for it to sponsor some research as well as service expansion. But the simple answer is no, we don't have specific population-based prevention measures the way we do for our outpatient mental health services. You said only one more, but could I? Okay. Okay, thanks.
Just to maybe touch upon some of the themes that have been raised: it seems in California and nationally we would have an opportunity around early psychosis intervention if there were some agreement on key outcomes. The challenge is that if you're not using comparison groups, you at least need benchmarks. That's where something like national benchmarks from EPINET might come forward.
Because otherwise you can make a lot of progress but you can't show it. The other thing is to be more mindful about actually having comparison counties that are not doing early intervention, which is also something that could be done. The other aspect of this is to have outcomes that are broad enough that they capture what the economist referred to as some of those external factors: the justice involvement, the ER use, and we are not always tracking those things in our mental health programs. And then also to be broad enough to capture some of the stigma, the public engagement, the awareness of these programs. Not just the program response, because literally, who gets to them, right? So I think if we were being mindful around these things like Maryland has been, then it really could help make progress. And the other thing I would say would be helpful with EPINET and to California is to register the actual toolkits and programs that are being used, because people are innovating all over the place.
Let's do this form of CBT, let's do that, you know, mixing and matching, but we at least need to have that available so that we are borrowing from some common pool, as well as having benchmarks related to the outcomes we're actually using. So Dr. Goldman, you suggested, I think you implied, that there wasn't any necessary conflict between evaluation expenditures and program expenditures, and I wondered if you or Dr. Dewa could comment on what we know at this stage about optimal levels of expenditure. Because there are opportunity costs in spending money on evaluation. What is the sweet spot on expenditure for evaluation? This is a very live question at the county level and the program level, to understand what they're doing and how they know what they're doing. Well, we talked about court support.
It's one of those areas where justice and mental health and health decided to work together. So for a limited period, it was kind of like a pilot project: they were collecting data to make the case that they should be working together, and that their budgets shouldn't always be separate but should be pooled, because they both benefit when there's some expenditure for people with mental illness when they come into contact with justice. So actually now Ontario has gone to creating an administrative data set for mental illness, for people who are in mental health programs in the community and in the hospital, where they're able to link through their insurance number. So it's kind of like Maryland, where you're supposed to do it at baseline when you first come into contact, and if you're still in the program at six months, do another evaluation, so that you're able to track.
So that's been useful. But also, when several sectors work together, that's how we were able to make the business case for why they should do these programs. I don't know what the sweet spot is, but it is a very important question with respect to opportunity cost. You know, you can't keep doing evaluations; that inevitably will be at the expense of service provision. But our level of expenditure is really rather low. We have a very small state administration, and we're just an extension of their staff without the carrying costs; the University bears the carrying costs.
So I was going to ask just one last question of Bob. We know from the data that we have that fidelity of mental health treatment across the board, whether it's in the public sector or the private sector in the United States, is low. Is there anything in the IOM report that would speak to the value of data, the level of quality or quantity of data that's needed, in order to establish a learning health care system that could move fidelity up in the case of mental health? I think the way that would be approached is that you would be looking for outcomes that you would expect under high-quality conditions, and if you weren't achieving those outcomes, then the system would begin to backtrack to look at things like the quality of the intervention and then target aspects of quality; and if fidelity to a particular procedure was a piece of it, it would be targeted, and then subsequent data would be used to see whether that strategic intervention had actually solved the problem and the outcomes were returning to the expected level. So that's kind of the way that it works. Since you made the mistake of handing me the microphone, just one final comment. Ken's comment about benchmarking made me think about this. There was another landmark study of first-episode psychosis in the United States that was conducted in the state of Iowa by Nancy Andreasen over a very long period of time, 26 years.
She enrolled over 500 individuals at the time of their initial psychotic episode and followed them for very long periods. There are about five hundred people for whom the data are very dense up to ten years after their initial episode, and these data are extraordinarily dense in terms of their life course, in terms of symptoms, in terms of work, school, functioning. There's also a wealth of data about the biology related to the illness.
Why am I telling you all of this? Because in this era of data sharing, Nancy has begun to enter all of her data, for those 26 years and five hundred individuals, into the national database of clinical data, actually it's the RDoC database at NIMH, and within a year all of this data will be freely available to anyone. Now, it's not perfect, but it gets to the issue of benchmarking, because these were people who were in a research study, not a treatment study. So this is a way of looking at what the variation in outcomes associated with usual care is in a large number of very well characterized individuals. Assessments every six months for that period of time.
And the vision that we have with Nancy is that we could describe the trajectories and the variations in trajectory, and this would be one piece of information that programs could use to see whether they are achieving outcomes. This really is going to be a phenomenal resource, made available to the scientific community, which can then turn it into tools that programs can use for that kind of benchmarking.