Energy Management Systems

Factors, trends, and requirements having the biggest impact on developing such systems in the next decade
by Faramarz Maghsoodlou, Ralph Masiello, and Terry Ray
As the central nervous system of the power network, the control center—along with its energy management system (EMS)—is a critical component of the power system operational reliability picture. Although it is true that utilities have been relying on EMS for over two decades, it is also true that many of the systems in place today are outdated, undermaintained, or underused compared with the total potential value that could be realized.
Many utilities are operating with EMS technology that was installed in the early 1990s; thus, the technology base and functionality is a decade old. The EMS industry overall did not markedly alter its technology in the second half of the decade as the investment priority in the late 1990s turned from generation/reliability-…
Transmission entities therefore slashed their EMS budgets and staff, thinking that local independent system operators (ISOs) would assume responsibility for grid security and operational planning. Those factors, combined with the fact that EMS technology has historically lagged behind the IT world in general, have created a situation where control room technology is further behind today than it has ever been.
This article examines some of the factors, trends, and requirements that will have the biggest impact on the development, over the next decade, of energy management systems that are more reliable, secure, and flexible and capable of meeting the anticipated new requirements of pending legislation, deregulation, and open access.
Three Lessons
Looking past the system failures and nonfunctioning applications that have been so well publicized in blackout analyses, there are three vitally important lessons that the whole systems operations community should take away from the reports issued by the U.S.-Canada Power System Outage Task Force.
First, the number of alarms—breaker operations and analog measurements of voltage and line flows exceeding limits and oscillating in and out of limits—far exceeded the design points of the systems deployed in the early 1990s. A more realistic design philosophy in light of this would be to develop “worst case” requirements, stipulating that systems must function when all measured points rapidly cycle in and out of alarm.
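As an illustration of that philosophy, a minimal sketch of what such a “worst case” acceptance test might look like follows: it drives every measured point in and out of alarm as fast as possible and reports the sustained throughput. The process_alarm handler and point list are hypothetical stand-ins for a real EMS alarm interface, not anything specified in this article.

```python
import time

def worst_case_alarm_test(process_alarm, points, cycles=100):
    """Cycle every measured point in and out of alarm as fast as
    possible and report the throughput the alarm handler sustained."""
    start = time.monotonic()
    for _ in range(cycles):
        for point in points:
            process_alarm(point, state="ALARM")   # limit exceeded
            process_alarm(point, state="NORMAL")  # back within limits
    elapsed = time.monotonic() - start
    transitions = 2 * cycles * len(points)
    print(f"{transitions} transitions in {elapsed:.1f} s "
          f"({transitions / elapsed:,.0f}/s)")

# usage (hypothetical handler and point list):
# worst_case_alarm_test(ems.process_alarm, all_scada_points)
```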
Second, the blackout did not occur instantly. Rather, the voltages collapsed and lines successively tripped over a period of time. Had the operators had good information and…

…the system “is being stressed in new ways for which it was not designed” and that “a number of improvements to the system could minimize the potential threat and severity of any future outages.”
“EPRI believes that the lessons learned from the 14 August power outage will support the development and deployment of a more robust, functional, and resilient power delivery system,” he said.
In the view of KEMA and META Group, rectifying the shortcomings of our EMS/SCADA systems will be accomplished in three distinct, sequential phases (see Figure 1). Phase I will encompass an emphasis on control room people, processes, and policies and is happening this year. Phase II will encompass communications and control capabilities, and, in some cases, plans for phase II activities and projects are already underway. Phase III will be investment in infrastructure and intelligence, which will take longer to accomplish because of difficulties in funding large capital projects and in getting needed regulatory and political approvals.
Phase I—People, Processes, and Policies
NERC is currently driving phase I, with the formulation of standards for mandatory EMS performance, availability,…

…phase I activities, these are an order of magnitude less costly than the major infrastructure investments that will occur in phase III.
Phase II—Communications and Control
Phase II will include developing a more interconnected approach to communication and control, for example, development of a regional approach to relay setting and coordination, system planning at a regional level, and implementation of policies, procedures, and technologies that facilitate real-time sharing of data among interconnected regions.
The deployment of real-time phasor measurements around the country is being planned and, as this becomes available, the regional control systems at ISOs and regional transmission organizations (RTOs) and NERC regional coordinators will develop applications that can use this information dynamically to help guide operators during disturbances.
Phase III—Investment, Infrastructure, and Intelligence
The emphasis of phase III will be on investment in enhanced instrumentation and intelligence, along with a renewed investment in the power system infrastructure and the technology to better manage it.
The new infrastructure may include, as many propose, FACTS devices and other new transmission technologies and devices providing what we think of as ancillary services (superconducting VAR support, for example). What we do know is that these prospective new technologies will require new modeling and new analysis in EMS applications.
EMS and system operations will also have a role to play in transmission planning for straightforward new transmission line infrastructure. We have learned that planning studies not closely aligned with operational data are too abstract.

Visualization Technology
…voltage levels, real and reactive power flow, phase angles, the impact of transmission-line loading relief (TLR) measures on existing and proposed transactions, and network overloads. In fact, the Department of Energy’s 2002 National Grid Study recommends visualization as a means to better understand the power system.
Three-dimensional, geo-spatial, and other visualization software will become increasingly indispensable as electricity transactions continue to increase in number and complexity and as power data, historically relevant to a contained group of entities, is increasingly communicated more widely to the ISOs and RTOs charged with managing an open grid. Not only do visualization capabilities enable all parties to display much larger volumes of data as more readily understandable computer-generated images, but they also provide the ability to immediately comprehend rapidly changing situations and react almost instantaneously.
Three-dimensional visualization is an invaluable tool for using abstract calculated values to graphically depict reactive power output, impacts of enforcing transmission line constraints, line loadings, and voltage magnitudes, making large volumes of data with complex relationships easily understood.
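As a rough illustration of the idea (not any vendor’s product), the sketch below renders bus voltage magnitudes as a three-dimensional bar chart over synthetic geographic coordinates, so low-voltage pockets stand out against the nominal 1.0 per-unit plane. All data here is invented for the example.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic data: 50 buses scattered over a 100 x 100 km service area.
rng = np.random.default_rng(seed=1)
x, y = rng.uniform(0, 100, 50), rng.uniform(0, 100, 50)
v = 1.0 + 0.05 * rng.standard_normal(50)      # |V| in per unit

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# Bar height encodes voltage magnitude; color flags low-voltage buses.
colors = plt.cm.coolwarm(np.clip((1.05 - v) / 0.10, 0, 1))
ax.bar3d(x, y, np.zeros_like(v), 2.0, 2.0, v, color=colors)
ax.set_xlabel("east-west (km)")
ax.set_ylabel("north-south (km)")
ax.set_zlabel("|V| (p.u.)")
plt.show()
```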
Advanced Metering Technology
In this age of real-time information exchange, automated meter reading (AMR) has set new standards by which the energy market can more closely match energy supply and demand through more precise load forecasting and management, along with programs like demand-side management and time-of-use rate structures. Beyond AMR, however, a…

figure 2. Knowledge-based models enable reasoning: detect, diagnose and explain, and respond with models, answering “What is the state of the system?” and “What is the significance of the data?”
…for more reliable and timely settlement processes are all drivers for enhanced metering capabilities. This, in turn, will create a demand for EMS solutions capable of handling much larger volumes of data, along with the analytical tools to manage this data.
More Stringent Alarm Performance
The 2003 blackout drew attention to what has become a potentially overwhelming problem—SCADA/EMS has little or no ability to suppress the bombardment of alarms that can overwhelm control room personnel during a rapidly escalating event. In a matter of minutes, thousands of warnings can flood the screens of dispatchers facing an outage situation, causing them to ignore the very system that’s been put in place to help them.
Although distribution SCADA has been able to take advantage of straightforward priority and filtering schemes to…

…so operators can prioritize actions.
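A minimal sketch of one such priority-and-filtering scheme follows: a queue that drops repeat alarms from a chattering point inside a suppression window and always surfaces the most severe class first. The severity table and window length are illustrative assumptions, not values from this article.

```python
import heapq
import time

# Illustrative severity classes; real schemes are utility-specific.
SEVERITY = {"BREAKER_TRIP": 0, "VOLTAGE_LIMIT": 1, "LINE_OVERLOAD": 1,
            "RETURN_TO_NORMAL": 3}

class AlarmQueue:
    """Suppress chattering points and rank what remains by severity,
    so a flood of nuisance alarms cannot bury a breaker trip."""

    def __init__(self, suppress_window=5.0):
        self._heap = []
        self._last_seen = {}            # point id -> last accepted time
        self._window = suppress_window  # seconds

    def push(self, point, kind):
        now = time.monotonic()
        # Drop repeats of the same point arriving within the window.
        if now - self._last_seen.get(point, float("-inf")) < self._window:
            return
        self._last_seen[point] = now
        heapq.heappush(self._heap, (SEVERITY.get(kind, 2), now, point, kind))

    def pop_most_urgent(self):
        return heapq.heappop(self._heap) if self._heap else None
```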
Also to be watched is the promise of the digital dashboard, heretofore unfulfilled in the control room environment but offering the ability to integrate information from many sources into information portals that provide ready desktop access to the data each user needs to perform his or her job functions, with an emphasis on business intelligence and knowledge management.
Data Warehousing
For many years, utilities have been archiving the operational (real-time) and nonoperational (historic) information captured by energy management systems. Today’s thought leadership shift is to focus on how this archived operational and nonoperational data can be combined with emerging analytic functionality to meet a host of business needs, for example, to more readily identify parts of the network that are at the greatest risk of potential failure. If integrated properly, heads-up information stored by these systems can also aid utilities in proactive replacement or reinforcement of weak links, thus reducing the probability of unplanned events.
A recent study conducted by IBM showed that today the typical company utilizes only 2–4% of the data collected in operational systems. Data marts are one way to more fully leverage and use data to produce measurable improvements in business performance.
A data mart, as defined in this article, is a repository of the measurement and event data recorded by automated systems. This data might be stored in an enterprise-wide database, data warehouse, or specialized database. In practice, the terms data mart and data warehouse are sometimes used interchangeably;…

…needed to do this is readily available from SCADA systems. This is an example of time-series data being stored in a data warehouse designed for the task, such as PI Historian.
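PI Historian’s interfaces are proprietary, so as a stand-in the sketch below uses SQLite to show the shape of the task: append time-stamped samples for each point, then roll them up (here, into hourly peaks for loading studies). The table and column names are invented for illustration.

```python
import sqlite3

con = sqlite3.connect("scada_history.db")
con.execute("CREATE TABLE IF NOT EXISTS samples (point TEXT, ts REAL, value REAL)")
con.execute("CREATE INDEX IF NOT EXISTS idx_point_ts ON samples (point, ts)")

def record(point, ts, value):
    """Append one time-stamped measurement for a point."""
    con.execute("INSERT INTO samples VALUES (?, ?, ?)", (point, ts, value))

def hourly_peaks(point, t_start, t_end):
    """Roll raw samples up into hourly peaks, e.g., for loading studies."""
    return con.execute(
        "SELECT CAST(ts / 3600 AS INTEGER) * 3600 AS hour, MAX(value) "
        "FROM samples WHERE point = ? AND ts BETWEEN ? AND ? "
        "GROUP BY hour ORDER BY hour",
        (point, t_start, t_end)).fetchall()
```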
Another example: to demonstrate compliance with code-of-conduct and reliability procedures, it is necessary to track all the approvals and operational actions associated with a transformer outage. This is a combination of transactional information (requests and approvals) and event information (control actions and alarms), linked over time. It requires combining a transactional data mart, triggered by entries on screens, with data collection in real time.
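A minimal sketch of that linkage, again with invented table names: the transactional trail (requests and approvals) and the real-time trail (control actions and alarms) are merged into a single time-ordered record for the piece of equipment under review.

```python
import sqlite3

con = sqlite3.connect("ops_audit.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS approvals (equip TEXT, ts REAL, actor TEXT, action TEXT);
CREATE TABLE IF NOT EXISTS events    (equip TEXT, ts REAL, kind TEXT, detail TEXT);
""")

def outage_record(equip, t_start, t_end):
    """One time-ordered trail mixing approvals with control actions
    and alarms: the evidence needed for a compliance review."""
    return con.execute(
        "SELECT ts, 'approval' AS src, actor || ': ' || action FROM approvals "
        "WHERE equip = ? AND ts BETWEEN ? AND ? "
        "UNION ALL "
        "SELECT ts, kind, detail FROM events "
        "WHERE equip = ? AND ts BETWEEN ? AND ? "
        "ORDER BY ts",
        (equip, t_start, t_end, equip, t_start, t_end)).fetchall()
```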
A third example: reliability-centered maintenance (RCM) is enhanced if the distortions in the 60-Hz waveforms of electrical measurements at the transformer can be tracked over time. This is a waveform analysis over a sampled time series. It requires interaction with a substation computer and is not easily supported in either a transactional or time-series database. The solution lies in the kinds of proprietary systems used for similar RCM work on jet engines and combustion turbines.
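The core calculation, at least, is tractable; below is a minimal sketch of tracking waveform distortion as total harmonic distortion (THD), computed from a sampled 60-Hz waveform with an FFT. A rising THD trend for a transformer, plotted week over week, is the kind of signal such RCM work looks for. The sampling rate and harmonic count are illustrative choices.

```python
import numpy as np

def thd(samples, fs, f0=60.0, n_harmonics=9):
    """Total harmonic distortion: RMS of the 2nd..9th harmonics
    relative to the fundamental, from a sampled waveform."""
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)

    def magnitude(f):
        return spectrum[np.argmin(np.abs(freqs - f))]  # nearest FFT bin

    fundamental = magnitude(f0)
    harmonics = [magnitude(k * f0) for k in range(2, n_harmonics + 1)]
    return np.sqrt(sum(h * h for h in harmonics)) / fundamental

# Example: a 60-Hz wave with a 5% fifth harmonic, sampled at 4 kHz.
t = np.arange(0, 0.5, 1 / 4000)
wave = np.sin(2 * np.pi * 60 * t) + 0.05 * np.sin(2 * np.pi * 300 * t)
print(f"THD = {thd(wave, fs=4000):.3f}")   # roughly 0.05
```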
Risk Management and Security
Many utilities are coming to the realization that compliance with the Sarbanes-Oxley Act (SOX) can be extended to mean that EMS systems and their shortcomings present serious risk issues that must be addressed to prevent the financial penalties that could accrue as a result of a long-term outage. Similarly, when a utility has a rate case pending or operates under performance-based rates measured by reliability, there is a direct connection between the EMS and the…

This will have implications for the QA and software life-cycle management tools and methods used by vendors and consultants as well as utilities.
Finally, there is a need to show compliance with NERC, ISO, code-of-conduct, and other standards for operations. EMS must be enhanced to provide easily recovered audit trails of all sorts of actions and system performance to provide compliance reports and analyses.
Advanced Network Analysis Applications
Another key factor critical to the success of the EMS technology of tomorrow is the incorporation of advanced network analysis algorithms and applications. Most systems in place today are still based on Newton-Raphson power flow analysis and related or derivative methodologies, whose inherent shortcoming is that they fail to converge when network conditions are too far from nominal, especially in times of near voltage collapse. For real-time calculations, dif…
figure 3. Sarbanes-Oxley relevance to EMS: act compliance, the financial impact of loss of the EMS system, and certification of cybersecurity and quality of software.

…technology considerably since it was introduced to electric power. Techniques such as sequential state estimation are worth looking at, especially for ISO/RTO applications, where the time synchronization of the analog measurements is not as robustly enforced.
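To make the convergence issue concrete, here is a minimal sketch (not any vendor’s solver) of a Newton-Raphson power flow for a two-bus system: a slack bus at 1.0 per unit feeding a load bus through a series reactance. At light load it converges in a handful of iterations; push the load past the nose of the PV curve and the same iteration diverges, which is exactly the failure mode described above. The network values are illustrative.

```python
import numpy as np

def mismatch(x, p_load, q_load, x_line=0.1):
    """Active/reactive power mismatch at the load bus of a two-bus
    system: slack at 1.0 p.u., series reactance x_line."""
    theta, v = x
    y = 1.0 / (1j * x_line)                   # line admittance
    v1, v2 = 1.0 + 0j, v * np.exp(1j * theta)
    s2 = v2 * np.conj(y * (v2 - v1))          # power injected at bus 2
    return np.array([s2.real + p_load, s2.imag + q_load])

def newton_raphson(p_load, q_load, tol=1e-8, max_iter=20):
    x = np.array([0.0, 1.0])                  # flat start: angle 0, |V| 1
    for iteration in range(max_iter):
        f = mismatch(x, p_load, q_load)
        if np.max(np.abs(f)) < tol:
            return x, iteration
        jac = np.empty((2, 2))                # numerical Jacobian
        for j in range(2):
            dx = np.zeros(2)
            dx[j] = 1e-7
            jac[:, j] = (mismatch(x + dx, p_load, q_load) - f) / 1e-7
        x = x - np.linalg.solve(jac, f)
    raise RuntimeError("did not converge -- operating point may not exist")

print(newton_raphson(1.0, 0.3))               # light load: converges
try:
    newton_raphson(6.0, 2.0)                  # beyond maximum transfer
except RuntimeError as err:
    print("heavy load:", err)
```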
Operator/Dispatcher Training Simulator
Most EMS systems deployed in the 1990s already include operator training simulator (OTS) functionality, but a recent survey initiated by KEMA and META Group indicates that many are not in use, primarily due to the lack of staff to support them and conduct the training. Based on the recommendations of NERC and other industry and regulatory groups, this will change as more utilities take the steps needed to leverage the technological capabilities they already possess.
As with other network analysis applications, the OTS needs robust algorithms that are capable of simulating abnormal voltage conditions. It is also imperative that the representation of network and equipment models in the OTS be consistent with those used in real-time applications, to realistically simulate current and potential future conditions. Ideally, all model updates in the real-time system should be automatically propagated to the OTS to keep the two models in sync. The OTS will also be called upon to support “group” training of transmission operations and ISO operation; therefore, the network and process modeling has to be coordinated hierarchically across the individual utilities and the ISO.
Communication Protocols
EMS systems must have the capacity to talk to “legacy,” i.e., preexisting, remote terminal units (RTUs) and, thus, are severely handicapped today in that many still rely on serial…

Enterprise Architecture
…an object model represents all aspects of the business, including what is known, what the business does, the business constraints, and the business’ interactions and relationships.
More practically, a good EA can provide the first complete view of a utility’s IT resources and how they relate to business processes. Getting from a utility’s existing or baseline architecture to an effective EA requires defining both a target architecture and its relationship to business processes, as well as the road map for achieving this target. An effective EA will encompass a set of specifically defined artifacts or systems models and include linkages between business objectives, information content, and information technology capabilities. Typically, this will include definitions of
✔ business processes, containing the tasks performed by each entity, plus anticipated change agents such as pending legislation or regulations that might impact business processes
✔ information and the way it flows among business processes
✔ applications for processing the information
✔ a model of the data processed by the utility’s information systems
✔ a description of the technology infrastructure’s functional characteristics, capabilities, and connections.
Though no industry-standard technical/technology reference model exists for defining an EA, it is clear that component-based software standards, such as Web services, as well as popular data-exchange standards, such as the extensible markup language (XML), are preferred, as are systems that are interoperable, scalable, and secure, such as Sun Microsystems’ Java 2…

…the Internet. The bulk of today’s IT systems, including Web-oriented systems, can be characterized as tightly coupled applications and subsystems.
Monolithic systems like these are sensitive to change, and a change in the output of one of the subsystems often causes the whole system to break. A switch to a new implementation of a subsystem will also often cause a breakdown in collaboration among systems. As scale, demand, volume, and rate of business change increase, this weakness can become a serious problem marked by unavailable or unresponsive Web sites, lack of speed to market with new products and services, or inability to meet new business opportunities or competitive threats.
As a result, the current trend is to move away from tightly coupled monolithic systems and towards loosely coupled systems of dynamically bound components. Web services provide a standard means of interoperability between different software applications running on a variety of platforms or frameworks. They are composed of self-contained, modular applications that can be described, published, located, and invoked over the Internet, and the Web services architecture is a logical evolution of…
[figure: a layered view of integration standards, from semantics (BizTalk, XML.org, OAGIS, UIG-XML, CCAPI, CIM) through format (XML) and interaction (J2EE/EJB, CORBA, MSFT .Net/COM+) down to transport (IP), with workflow (UML), security, and integrity as accompanying layers.]
✔ enabling just-in-time integration
✔ reducing complexity by encapsulation
✔ enabling interoperability of legacy applications.
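As a toy illustration of that loose coupling (the endpoint and data are invented, not from this article), the sketch below exposes line loadings as a self-contained HTTP/JSON service using only the Python standard library. Consumers depend on the URL and document shape alone, so the implementation behind the contract can be swapped out without breaking them.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

LOADINGS = {"line_14": 0.82, "line_27": 1.07}    # illustrative data

class LoadingService(BaseHTTPRequestHandler):
    """A self-contained service: callers see only the URL and the
    JSON document shape, never this implementation."""

    def do_GET(self):
        line = self.path.strip("/")
        if line not in LOADINGS:
            self.send_error(404, "unknown line")
            return
        body = json.dumps({"line": line, "loading_pu": LOADINGS[line]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # e.g., GET http://localhost:8080/line_14 -> {"line": "line_14", ...}
    HTTPServer(("localhost", 8080), LoadingService).serve_forever()
```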
Cybersecurity Standards
Used throughout the industrial infrastructure, control systems have been designed to be efficient rather than secure. As a result, distributed control systems (DCSs), programmable logic controllers (PLCs), and supervisory control and data acquisition (SCADA) systems present attractive targets for both intentional and unintentional catastrophic events.
To better secure the control systems controlling the critical…

…enormous capital investment, estimated by some at US$10 billion a year for the next decade, at the very least, there are numerous steps that can be taken toward greatly enhanced reliability through much smaller investments in processes and technology.
Four key pieces of advice are as follows. One, there is some ground to be gained by simply getting the EMS technology that is currently in use within the utility industry fully functional again, releasing the true potential value of the investment. Two, reinvigorate OTS and training programs. Three, investigate more robust approaches to network analysis. And four, take the steps necessary to minimize the potential financial impact of Sarbanes-Oxley.
For Further Reading
U.S.-Canada Power System Outage Task Force, Final Report on the August 14th Blackout in the United States and Canada [Online].
“Emerging tools target blackout prevention,” Computerworld, Aug. 25, 2003 [Online]. Available: www.computerworld.com/securitytopics/security/recoverystory/0,10801,84322,00.html
Tucson Electric Power press release [Online].
Elequant launch press release [Online].
Biographies
Faramarz Maghsoodlou is an executive consultant and director of systems and technology services with KEMA, Inc. With over 25 years of experience in the energy and software industry, he specializes in energy systems planning, operation, and optimization and enterprise software applications. He can be reached at fmaghsoodlou@kemaconsulting.com.

