We study a class of chance-constrained two-stage stochastic optimization problems
where second-stage feasible recourse decisions incur additional cost. In addition,
we propose a new model, where "recovery" decisions are made for the infeasible
scenarios to obtain feasible solutions to a relaxed second-stage problem. We develop
decomposition algorithms with specialized optimality and feasibility cuts to solve this
class of problems. Computational results on a chance-constrained resource planning
problem indicate that our algorithms are highly effective in solving these problems
compared to a mixed-integer programming reformulation and a naive decomposition approach.
This is joint work with Xiao Liu and Jim Luedtke.
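In generic terms, the class of models considered can be sketched as follows (an illustrative form only, not necessarily the authors' exact formulation):

$$
\min_{x \in X} \; c^{\top}x + \mathbb{E}\big[Q(x,\xi)\big] \qquad \text{s.t.} \qquad \mathbb{P}\big\{\xi : \exists\, y \ge 0 \text{ with } Wy \ge h(\xi) - T(\xi)x\big\} \;\ge\; 1-\epsilon,
$$

where $Q(x,\xi)$ denotes the optimal recourse cost of a feasible scenario. In this sketch, the decomposition algorithms would approximate $\mathbb{E}[Q(x,\xi)]$ with optimality cuts and enforce the chance constraint with feasibility cuts.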
Bio: Simge Küçükyavuz is an Associate Professor in the Integrated Systems Engineering Department at the Ohio State University. She received her MSc and PhD degrees from the University of California, Berkeley, and her BS degree from the Middle East Technical University. Her interests are in mixed-integer programming, large-scale optimization, optimization under uncertainty, and their applications. Her research is supported by multiple grants from the National Science Foundation, including the 2011 CAREER Award. She serves on the editorial boards of several journals, including Computational Optimization and Applications, Networks, and Journal of Global Optimization.
For many decision making problems under uncertainty, it is crucial to specify decision
makers' risk preferences based on multiple stochastic performance measures. We
consider a class of multicriteria stochastic optimization problems that feature risk-
averse benchmarking constraints based on conditional value-at-risk and second-
order stochastic dominance. We develop alternative mixed-integer programming
formulations and solution methods for cut generation problems arising in optimization
under such multivariate risk constraints. We give the complete linear description
of two non-convex substructures appearing in these cut generation problems. We
present computational results that show the effectiveness of our proposed models and methods.
(Joint work with Simge Küçükyavuz, The Ohio State University)
Nilay Noyan is an Associate Professor in the Manufacturing Systems and Industrial Engineering Program at Sabancı University, Turkey. She is a recipient of the Young Scientist (BAGEP) Research Award of the Science Academy, Turkey. She received her Ph.D. degree in operations research from Rutgers University, USA, in 2006. Her research interests include optimization, stochastic programming, risk modeling, and stochastic optimization applications with an emphasis on sustainable urban transportation, airline revenue management, and disaster relief network design.
Optimization can play a significant role in computational mechanics. In this talk, I
will present two interface mechanics problems where the choice of the optimization
algorithm determines accuracy, efficiency and robustness. The first half of the talk
will be devoted to contact mechanics problems which are highly nonlinear primarily
due to large deformations and the presence of contact constraints. Specifically, I will
concentrate on the formulation and application of a class of interior point methods
to the weak enforcement of the constraints. In the second half, I will summarize our
ongoing work on micro-texture design for thin-film lubrication. Here, the problem is
posed in the context of the homogenization theory which constitutes a link between
the objective function and the pixelated optimization space. Overall, I aim to give a
balanced overview of our goals (mechanics and computation) and the employed tools
(discretization and optimization).
İlker Temizer received his B.S. degree (2001) from Boğaziçi University and his M.S. (2003) and Ph.D. (2005) degrees from the University of California, Berkeley in Mechanical Engineering. Subsequently, he joined the Institute of Continuum Mechanics of the Leibniz University at Hannover as a post-doctoral researcher, where he held teaching and research responsibilities in addition to the leadership of a junior research group. Since September 2010, he has been a faculty member at Bilkent University. His research efforts are focused on the theoretical and numerical aspects of computational mechanics that are associated with multiscale-multiphysics modeling strategies for heterogeneous materials and interfaces.
This paper studies the capacity investment decisions of an agri-processor that uses a
commodity input to produce a commodity output and a byproduct. Using a multi-period
model, we study the one-time processing and (output) storage capacity investment decisions,
and the periodic processing and inventory decisions in the presence of input and output spot
price uncertainties and uncertain production yield. We identify three capacity investment
strategies, namely investing in a storage-dominating, a processing-dominating or a mixed portfolio, and
provide conditions under which each strategy is optimal. Using a calibration based on the
palm industry, we provide rules of thumb for capacity management: The processor should
decrease its processing capacity with an increase in price correlation; and with an increase
(a decrease) in input or output price volatility when this volatility is low (high). The storage
capacity should be adjusted in a similar fashion only as a response to a change in output price
volatility, otherwise it should not be altered. We find that not accounting for the byproduct
revenue or inventory holding possibility in capacity planning leads to sizeable profit loss.
Ignoring production yield uncertainty has a significant negative impact on profitability if the
capacity planning is made based on the maximum yield possible, as often done in practice;
but it has an insignificant impact if the planning is made based on the average yield.
Quang (Jason) D. Nguyen, Carlson School of Management, University of Minnesota; Tong Wang, National University of Singapore
Onur Boyabatli is Assistant Professor of Operations Management at the Lee Kong Chian School of Business, Singapore Management University. He holds a Ph.D. in Technology and Operations Management from INSEAD.
His research interests are in the area of integrated risk management in supply chains. In particular, he studies several operational issues (such as capacity management, procurement, farm-yield management, product pricing) of supply chain agents (such as farmers, processors) in different agricultural industries (such as live-animal, oilseeds, cocoa, grains). He also studies technology and capacity management, including their integration with financing and financial risk management, in capital-intensive industries (such as automotive).
His research papers have been published in Management Science. He is the co-editor of “Handbook of Integrated Risk Management in Global Supply Chains.” He was awarded the Management Science Distinguished Service Award in 2012. He was nominated for the IFORM (Interface of Finance, Operations and Risk Management) SIG (Special Interest Group) Chair in 2014. He co-organized the IFORM at the MSOM conference in 2012 and 2014, the 5th Management in Operations and Global Supply Chains in 2008, and co-chaired the M&SOM-IFORM Cluster at the 2012 and 2013 INFORMS Annual Meetings.
Assemble-to-order (ATO) systems are popular in many industries including, but not limited
to, automotive, consumer electronics, and online retailing, where offering customized
products and services within a short response window plays a vital role. Although popular
in practice, characterizing optimal control policies for ATO systems has remained a
longstanding challenge in the field of Operations Management. This talk will present our
progress towards solving this important and difficult problem.
In the first part, we address the problem of inventory replenishment and allocation for a manufacturer who sells an assembled product as well as individual spare parts. We model the problem as a Markov decision process, with state space consisting of component inventory levels. By partitioning the state space into multiple disjoint lattices based on products’ component requirements, we establish the optimality of base-stock production and stock rationing policies on each lattice. Our computational results reveal the practicality of such lattice-dependent base-stock and rationing policies for ATO systems with general product structures.
In the second part, we present an approximate dynamic programming method to the general ATO problem, under Markovian assumptions on production and demand. We approximate the optimal cost function by reducing the state space of the original problem via a novel aggregation technique, which may significantly alleviate the computational burden. We show the optimality of lattice-dependent base-stock and rationing policies for the aggregate problem. We also derive error bounds for this approximation and provide computational results. We conclude with a discussion of future research directions in this field.
Parts of this research are joint work with Alp Akcay from Bilkent University, Mustafa Akan and Alan Scheller-Wolf from the Tepper School of Business, Carnegie Mellon University.
Short Bio: Emre Nadar is an assistant professor in the Department of Industrial Engineering at Bilkent University. He received his B.S. degree in Industrial Engineering from Bilkent University, and his M.S. and Ph.D. degrees in Operations Management from the Tepper School of Business, Carnegie Mellon University. His research interests include stochastic dynamic programming, queueing theory, supply chain management, and sustainable operations.
Digital control and communication are being used to modernize urban infrastructures,
such as electrical grids and water systems, to meet the increasing demands of future
cities for size, performance, and sustainability. While the added intelligence provides
desirable performance features, it also adds to the system's complexity, which
raises equally critical concerns for safety and security. In this talk, I will overview
research that analyzes cyber-physical system (CPS) vulnerabilities such as real-time
consumer pricing in emerging smart-grid demand response. I will also propose a
design methodology to monitor the (not fully trustworthy) high-performance operation
of a smart grid, but assure its fallback to a safe operating mode whenever the
system drifts too close to unsafety.
David Yau received the B.Sc. (first class honors) from the Chinese University of Hong Kong, and M.S. and Ph.D. from the University of Texas at Austin, all in computer science. He has been Professor of Information Systems Technology and Design at SUTD since 2013. Since 2010, he has been Distinguished Scientist at the Advanced Digital Sciences Center, Singapore, where he leads the Smart Grid IT research program. Prior to Singapore, he was Associate Professor of Computer Science at Purdue University (West Lafayette), USA.
David’s research interests are in network protocol design and implementation, CPS security and privacy, quality of service, network incentives, and wireless and sensor networks. He received a CAREER award from the U.S. National Science Foundation. He was also the recipient of an IBM PhD Fellowship. He won Best Paper award from the 2010 IEEE International Conference on Multi-sensor Fusion and Integration (MFI). His recent papers in 2008 IEEE Int'l Conf. Mobile Ad-hoc and Sensor Systems (MASS), 2013 IEEE Int'l Conf. Pervasive Computing and Communications (PerCom), 2013 IEEE Int'l Conf. Cyber-Physical Systems, Networks, and Applications (CPSNA), and 2013 ACM BuildSys were Best Paper finalists.
This presentation gives a historical perspective on societal changes; presents critical
issues, challenges and myths of technology management; explains the management of
change in technical organizations; and discusses technological leadership.
Dundar F. Kocaoglu is Founding Chairman and Professor of the Engineering and Technology Management Department at Portland State University, Director of RISE (Research Institute for Sustainable Energy), and President and CEO of PICMET (Portland International Center for Management of Engineering and Technology). His research areas include decision theory, technology management, project management, innovation management, technology evaluation, R&D management, and resource optimization.
Dr. Kocaoglu received his B.S. in Civil Engineering from Robert College in 1960, M.S. in Structural Engineering from Lehigh University in 1962, M.S. in Industrial Engineering from the University of Pittsburgh in 1972, and Ph.D. in Operations Research and Systems Management, also from the University of Pittsburgh in 1976. He joined Portland State University to start the Engineering Management Program in 1987. The program has become the Engineering and Technology Management Department. It now has more than 250 graduate students, including about 40 pursuing their PhD degrees. Prior to 1987, Dr. Kocaoglu was the director of a similar program at the University of Pittsburgh. He has supervised 26 PhD dissertations in the two universities.
Dr. Kocaoglu worked in industry as an engineer and project manager from 1962 to 1971. He has been a consultant on engineering and technology management since 1973. His clients include Westinghouse, Brown Boveri, IBM, Intel Corporation, Hewlett-Packard, Xerox, Tektronix, II-Morrow, Cascade Microtech, several other small-to-medium sized technology-based companies, more than 20 universities, R&D centers and the United Nations. He has served on National Research Council committees for the evaluation of the NIST (National Institute of Standards and Technology) manufacturing centers, and for the improvement of U.S. Department of Energy decision making processes for nuclear waste disposal and decommissioning. He has also been an NSF (National Science Foundation) reviewer for research proposals, and served on NSF panels for the evaluation of research centers.
Dr. Kocaoglu has published about a hundred papers, delivered keynote speeches and made more than 200 presentations on engineering and technology management in international conferences. He is the author, editor or co-editor of six books. He was the Editor-in-Chief of *IEEE Transactions on Engineering Management* from 1986 to 2003, and the Series Editor of the John Wiley Book Series in Engineering & Technology Management from 1985 through 1998. Dr. Kocaoglu is the recipient of the IEEE Fellow Award, IEEE Centennial Medal, IEEE Millennium Medal and PICMET Fellow Award, all of which were awarded for "leadership in the development of the Engineering Management discipline". He was also named Distinguished Research Mentor by the National Science Council of Taiwan. Dr. Kocaoglu is listed in *Who's Who in the World, Who's Who in America, Who's Who in Education, Who's Who in Engineering, American Men and Women of Science* and more than a dozen other reference publications.
During the disaster response phase of the emergency relief, the aim is to reduce
loss of human life by reaching disaster affected areas with relief items as soon as
possible. Debris caused by the disaster blocks roads and prevents emergency
aid teams from accessing the affected regions. Deciding which roads to clear in
order to transport relief items is crucial to diminish the negative impact of a disaster
on human health. Despite the significance of the problem during response, in the
literature debris removal is mostly studied in recovery or reconstruction phases of a
disaster. The aim of this study is to provide solution methodologies for the debris
removal problem in the response phase. In particular, debris removal activities on certain blocked
arcs have to be scheduled in order to reach a set of critical nodes such as schools and
hospitals. Two mathematical models are developed with different objectives. The first
model aims to minimize the total time spent to reach all critical nodes whereas the
second minimizes weighted sum of visiting times where weights indicate the priorities
of critical nodes. Since obtaining solutions quickly is important in the early post-disaster
period, heuristic algorithms are also proposed. Two data sets belonging to the Kartal and
Bakırköy districts of İstanbul are used to test the mathematical models and heuristics.
Keywords: Debris management, debris removal, relief transportation, node routing.
In this study, the coordination of a manufacturer and a retailer in a supply chain
is considered, in a single-period environment where the manufacturer is subject to a
carbon emission restriction with a trade option. The customer demand over a period is
assumed to be a random variable with an arbitrary distribution. Two types of
contracts, namely the buyback and the target rebate contracts, are considered. For
each type, the contract parameters which achieve channel coordination are
studied. The models show that for both contract types, under specific parameter
settings coordination is achievable. In particular, we show that under a buyback
contract with a carbon trader manufacturer, coordination can be achieved, even if
no returns are allowed, contrary to the findings of Pasternack, who first studied
a buyback contract in a setting where carbon emissions are not taken into
consideration. The results are illustrated by numerical examples.
Breast cancer is of vital importance to women's health. Worldwide, breast cancer
incidence and mortality rates are increasing: every year about 1.4 million women are
diagnosed with breast cancer and about 500,000 women die from it. Breast cancer
has high incidence and mortality rates especially in developing countries, where late
diagnosis also increases the disease
burden. Lack of knowledge of the exact causes of breast cancer increases the
importance of early detection. The most effective way of early detection is to apply
mammography screening. Screening the accurate target population increases the rate
of early detection of breast cancer and lessens the economic and health burden of
disease. In this study, two simulation models were constructed in order to analyze
the population-based mammography screening programs for Turkey. One model was
run for 10 years for validation purposes while the other one was run for the women
born in 1980 over their lifetime to analyze several screening programs. Costs and
health outcomes of the screening policies were examined through cost-effectiveness
analysis. Based on this analysis with the second model, a good screening policy is
determined in terms of the beginning and final ages of screening and the screening frequency.
Keywords: Breast cancer, screening programs, mammography, simulation
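As a concrete illustration of the cost-effectiveness analysis mentioned above, the incremental cost-effectiveness ratio (ICER) of one screening policy over another can be computed as below; the function and all numbers are hypothetical placeholders, not outputs of the thesis models.

```python
# Illustrative ICER computation for comparing two hypothetical screening
# policies; the figures are placeholders, not results from the simulation models.
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    health effect (e.g., per quality-adjusted life year)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# hypothetical example: biennial screening vs. no screening
print(icer(cost_new=1200.0, effect_new=14.2, cost_old=800.0, effect_old=13.9))
```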
The aim of this study is to create flight schedules which are less susceptible to
unexpected flight delays. To this end, we examine the block time of a flight in two
parts: cruise time and non-cruise time. The cruise time is controllable within limits
and is taken as a decision variable in our model. The non-cruise time is subject to
variation. In order to account for the variability of non-cruise times in the planning
stage, we propose a nonlinear mixed-integer two-stage stochastic
programming model which takes the non-cruise time scenarios as input. The
published departure times of flights are determined in the first stage and the actual
schedule is decided in the second stage depending on the non-cruise times. The
objective is to minimize the airline's operating and passenger dissatisfaction costs. Fuel
and CO2 emission costs are nonlinear, and this nonlinearity is handled by second-order
conic inequalities. Two heuristics are proposed to solve the problem when the size of
networks and the number of scenarios increase. A computational study is conducted
using the data of a major U.S. carrier. We compare the solutions of our stochastic
model with the ones found by using expected values of non-cruise times and the
company's published schedule.
Keywords: Airline Scheduling, Stochastic Programming, Robust Optimization, Nonlinear Programming
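A standard device behind such conic representations, shown here for illustration only (the exact fuel-burn function used in the thesis may differ), is that a hyperbolic cost term $q \ge c/t$ with $t > 0$ and $c > 0$ admits a second-order cone form:

$$
q\,t \;\ge\; c,\;\; q,t \ge 0 \quad\Longleftrightarrow\quad \left\| \big(2\sqrt{c},\; q-t\big) \right\|_2 \;\le\; q+t,
$$

which lets nonlinear fuel and CO2 emission costs be handled by conic quadratic solvers.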
The newsboy problem has been studied extensively in the literature. The classical
newsvendor, representing the risk-neutral decision maker, determines the optimal
order/production quantity by maximizing the expected profit or minimizing the
expected total cost of a single period with stochastic demand. This approach is
not suitable if one also aims to reduce the chances of facing unexpected losses
due to demand uncertainty. In this thesis, two problems are investigated with a
single product newsvendor under CVaR maximization objective. The first problem
addresses the newsvendor model with two different carbon emission reduction
policies, namely, mandatory emission allowance and carbon emission trading
mechanism. In the second problem, as an extension of the first one, a newsvendor
with multiple resource constraints is considered for the cases where the resources
have quotas with trade options. Analytical expressions for optimal order/production
quantities are determined together with the optimal trading policy and numerical
examples are provided.
Keywords: Newsvendor, CVaR, Carbon Emissions, Cap and Trade.
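For intuition, a sample-based version of a CVaR-maximizing newsvendor can be sketched as follows; the demand distribution, price parameters, and the grid search are illustrative assumptions and stand in for the thesis's analytical treatment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative single-product newsvendor maximizing CVaR of profit.
# All parameters and the demand distribution are assumptions for this sketch.
price, cost, salvage = 10.0, 6.0, 2.0
alpha = 0.95                                  # CVaR confidence level
demand = rng.gamma(shape=4.0, scale=25.0, size=20000)

def cvar_profit(q):
    profit = (price * np.minimum(q, demand)
              + salvage * np.maximum(q - demand, 0.0) - cost * q)
    k = int(np.ceil((1 - alpha) * len(profit)))
    return np.sort(profit)[:k].mean()         # mean of the worst (1-alpha) tail

qs = np.linspace(1, 300, 300)
q_best = max(qs, key=cvar_profit)
print(f"CVaR-optimal order quantity ~ {q_best:.0f}")
```

Compared with maximizing expected profit, the CVaR objective typically pulls the order quantity toward less exposed values, which is the risk-averse behavior the thesis formalizes.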
We study the multi-product inventory pricing problem under stochastic and
price sensitive demand. We have initial inventory of m resources whose
different combinations form n products. Products are perishable and need
to be sold by a deadline. Demand for each product is modeled as a non-
homogeneous Poisson process whose intensity is a function of the current
price only. The aim is to set the price of each product over the selling period
to maximize the expected revenue. This problem is faced in various industries
including retail, airlines, automobile, apparel, hotels and car rentals. Our
contributions are twofold. First, we provide a closed form solution for the
special case of exponential price response where the elasticity parameters
of the demand function of all products are equal. Second, we develop two
classes of dynamic pricing heuristics: one using the value approximation
approach of dynamic programming and the other using the deterministic
version of the problem. Our numerical analysis indicates that dynamic pricing
yields significantly higher revenues compared to fixed price policies. One of
the dynamic pricing heuristics based on the deterministic problem provides
around 5-15% additional revenue compared to fixed price policies. Moreover,
the two value approximation heuristics that we suggest result in at most 0.5%
and 3.4% gaps in the expected revenue compared to the optimal dynamic
pricing policy for the general form of exponential price response. These additional
revenues can have a profound effect on the profitability of firms, so dynamic
pricing should be preferred over fixed price policies in practice.
Keywords: Dynamic pricing, network yield management, inventory pricing
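The flavor of the deterministic-problem heuristic can be seen in a single-product sketch with exponential price response; all parameter values below are assumptions for illustration, and the thesis treats the general multi-product network case.

```python
import numpy as np

rng = np.random.default_rng(1)

# Resolving heuristic built on the deterministic relaxation for one product
# with exponential price response lam(p) = a * exp(-p / theta).
# Parameter values are illustrative assumptions only.
a, theta = 50.0, 1.0        # demand scale and price-sensitivity parameter
T, x = 1.0, 30              # selling horizon and initial inventory
steps = 1000
dt = T / steps

revenue = 0.0
for k in range(steps):
    if x == 0:
        break
    t_left = T - k * dt
    # deterministic price: unconstrained optimum theta, pushed up whenever
    # deterministic sales a*exp(-p/theta)*t_left would exceed inventory x
    p = max(theta, theta * np.log(a * t_left / x))
    lam = a * np.exp(-p / theta)
    if rng.random() < lam * dt:   # Poisson arrival over the short interval
        revenue += p
        x -= 1

print(f"simulated revenue: {revenue:.2f}")
```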
An American option is an option that entitles the holder to buy or sell an asset
at a pre-determined price at any time within the period of the option contract.
A perpetual American option does not have an expiration date. In this study,
we solve the optimal stopping problem of a perpetual American stock option
from optimization point of view using linear programming duality under the
assumption that underlying’s price follows a discrete time and discrete state
Markov process. We formulate the problem with an infinite dimensional linear
program and obtain an optimal stopping strategy showing the set of stock
prices for which the option should be exercised. We show that the optimal
strategy is to exercise the option when the stock price hits a special critical
value. We consider the problem under the following stock price movement
scenario: We use a Markov chain model with absorption at zero, where at
each step the stock price moves up by ∆x with probability p, and moves
down by ∆x with probability q and does not change with probability 1 − (p
+ q). We examine two special types of exotic options. In the first case, we
propose a closed-form formula when the option is of collar type. In the second
case, we study multiple-type options, which are written on multiple assets, and
characterize the exercise region for different multiple-type options.
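A small value-iteration sketch of this price chain illustrates the threshold structure of the optimal policy; the discount factor, strike, and put payoff are assumptions added for illustration (the thesis itself works through infinite-dimensional LP duality and also treats collar and multiple-asset payoffs).

```python
import numpy as np

# Perpetual American put under the random-walk chain in the abstract:
# up dx w.p. p, down dx w.p. q, unchanged w.p. 1-(p+q), absorbing at zero.
# beta (discount), K (strike) and the put payoff are illustrative assumptions.
dx, p, q, beta, K = 1.0, 0.45, 0.50, 0.98, 20.0
n = 200                                  # truncate the price grid at n*dx
s = np.arange(n + 1) * dx
payoff = np.maximum(K - s, 0.0)

V = payoff.copy()
for _ in range(100000):
    cont = np.empty_like(V)
    cont[0] = payoff[0]                  # absorbed at zero
    cont[1:-1] = beta * (p * V[2:] + q * V[:-2] + (1 - p - q) * V[1:-1])
    cont[-1] = beta * V[-1]              # crude boundary at the truncation
    V_new = np.maximum(payoff, cont)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

exercise = s[np.isclose(V, payoff) & (payoff > 0)]
print(f"exercise immediately for prices below ~ {exercise.max():.1f}")
```

The computed policy is a single critical price below which exercising is optimal, matching the threshold characterization described above.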
To place emphasis on profound relations among airline schedule planning
problems and to mitigate the effect of unexpected delays, we integrate robust
schedule design, fleet assignment and aircraft routing problems within a daily
planning horizon while passengers' connection service levels are ensured via
chance constraints and maintenance requirements are satisfied. We propose
a nonlinear mixed-integer programming model. In the objective function, the
cost terms due to fuel consumption and CO2 emissions are nonlinear. This
nonlinearity is handled by second-order conic reformulation.
The key contribution of this study is to take into account the cruise time
control for the first time in an integrated model of these three stages of airline
operations. Changing the cruise times of flights in an integrated model makes it
possible to construct a schedule that increases the utilization of efficient aircraft and
even decreases the total number of aircraft needed while satisfying the same service
level and maintenance requirements for aircraft fleeting and routing. Besides, for
the robust schedule design problem, it is possible to improve the solution
since a routing decision could eliminate the necessity of inserting idle time
or compressing cruise time. In addition, we propose two heuristic methods
to solve large-sized problems faster than the integrated model. Eventually,
computational results using real data obtained from a major U.S. carrier are
presented to demonstrate the potential profitability of applying the proposed approach.
We study the multiperiod pricing problem of a service firm with capacity levels that vary over
time. Customers are heterogeneous in their arrival and departure periods as well as valuations,
and are fully strategic with respect to their purchasing decisions. The firm’s problem is to
set a sequence of prices that maximizes its revenue while guaranteeing service to all paying
customers. We provide a dynamic programming based algorithm that computes the optimal
sequence of prices for this problem in polynomial time. We show that due to the presence
of strategic customers, available service capacity at a time period may bind the price offered
at another time period. This phenomenon leads the firm to utilize the same price in multiple
periods, in effect limiting the number of different prices that the service firm utilizes in
optimal price policies. Also, when customers become more strategic (patient for service), the
firm offers higher prices. This may lead to the underutilization of capacity, lower revenues,
and reduced customer welfare. We observe that the firm can combat this problem when it has the
ability, beyond posted prices, to direct customers to different service periods.
This is a joint work with Christian Borgs (Microsoft), Jennifer Chayes (Microsoft), Ilan Lobel (NYU), and Hamid Nazerzadeh (USC).
*Bio:* Ozan Candogan is an assistant professor at the Fuqua School of Business, where he is a member of the Decision Sciences area. Prior to joining Fuqua, he received his Ph.D. and M.S. degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology. His research interests are in mechanism design, decision making in social and economic networks, and analysis of dynamic strategic interactions. He is a recipient of the 2012 Microsoft Research Ph.D. fellowship and the 2009 Siebel Scholarship.
Micro-scale parts and components made from engineering materials and having
complex three-dimensional surfaces are used in today's high-value-added products.
These components are commonly used in the biomedical and consumer electronics
industries, and fabricating micro parts at low cost without sacrificing quality is a
challenge. Micro mechanical milling is a viable technique that can be used to produce
micro parts; however, the existing knowledge base on micro milling is limited
compared to macro-scale machining operations.
The subject of this thesis is micro-scale pocket milling operations used in micro mold making, where the molds are used in micro plastic injection for the mass production of polymer micro parts. Modeling of pocket milling for basic pocket shapes is considered first. The developed milling model is then extended to more complex mold shapes. Minimum total production time is used as the objective in solving single-pass, multi-pass, and multi-tool problems. Case studies are presented for each problem type, considering the practical issues in micro milling. A software tool has been developed to optimize machining parameters, and it is shown that the developed pocket milling optimization model can successfully be used in process planning studies.
Keywords: Micro milling, tool path generation, sharp corner milling, pocket milling, optimization
As the world economy becomes increasingly conscious of environmental concerns,
evidence suggests that companies that consider the interests of other stakeholders,
including the human and natural environments, in their business decisions will succeed.
While environmental regulation policies aim to protect consumers, employees and the
environment, the cost of compliance should not deter companies from doing business.
To meet regulatory requirements, firms can change their production/inventory related
operations and/or invest in
green technologies. In this talk, we introduce two inventory models for providing guidance to
the companies to make better inventory decisions while utilizing the available environmental
technologies, and for helping policy-makers understand the implications of the different
regulation policies on the profitability of a company.
In the first model that will be introduced, we analyze a retailer’s joint decisions on inventory replenishment and emission reduction investment under three emission regulation policies; cap, tax and cap-and-trade. Our results provide guidelines and insights about five issues: (i) how much the retailer should order at each replenishment and how much he/she should invest in emission reduction to minimize long-run average costs, (ii) what the impact of having an investment option is on the retailer’s annual costs and emissions, (iii) how the retailer’s annual costs and emissions under an emission regulation policy compare to those when no regulation is in place, (iv) how the retailer should choose among different investment options available and (v) how the different regulation policies compare in terms of the retailer’s annual emissions and costs. In the second model, we also consider the retailer’s supplier, and study coordination and contracting between the two parties. We propose coordination mechanisms for this system to increase its profitability. We show that, even though such coordination mechanisms help the retailer and the supplier decrease their costs without violating the emission regulations, they may result in increased carbon emission under certain circumstances.
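In the spirit of the first model, the sketch below shows a single replenishment decision under a strict emission cap in an EOQ-type setting; parameter values and the emission accounting are illustrative assumptions, and the talk's model additionally covers the investment option and the tax and cap-and-trade policies.

```python
import numpy as np

# EOQ-style replenishment under a strict annual emission cap (illustrative
# assumptions throughout; not the talk's exact model).
d, K, h = 1000.0, 100.0, 5.0        # annual demand, fixed order cost, holding cost
e_f, e_h, cap = 80.0, 1.0, 450.0    # emissions per order, per unit held, annual cap

def cost(Q):      return K * d / Q + h * Q / 2.0
def emissions(Q): return e_f * d / Q + e_h * Q / 2.0

Q_eoq = np.sqrt(2.0 * K * d / h)    # classical EOQ quantity
if emissions(Q_eoq) <= cap:
    Q_opt = Q_eoq
else:
    # feasible quantities solve e_h*Q^2/2 - cap*Q + e_f*d <= 0; since the
    # cost is unimodal, the best feasible Q is the root closest to Q_eoq
    disc = np.sqrt(cap**2 - 2.0 * e_h * e_f * d)   # assumes the cap is attainable
    roots = ((cap - disc) / e_h, (cap + disc) / e_h)
    Q_opt = min(roots, key=lambda r: abs(r - Q_eoq))

print(f"Q = {Q_opt:.1f}, cost = {cost(Q_opt):.1f}, emissions = {emissions(Q_opt):.1f}")
```

The same structure makes the trade-off visible: tightening the cap moves the order quantity away from the cost-minimizing EOQ, which is where an emission-reduction investment can pay off.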
When we use simulation to estimate the performance of a stochastic system, the simulation
model often includes multivariate input models (e.g., product demands and exchange rates in
a global supply chain, processing times of a workpiece across several work centers) that are
estimated using finite samples of real-world data. The finiteness of the samples introduces
estimation errors in the input models, affecting the simulation output. However, the
propagation of input model uncertainty to output uncertainty is rarely considered in
simulation output analysis.
In this seminar, a discussion of input uncertainty issues and the state-of-the-art approaches for input uncertainty quantification -- which revolve around independent input models -- will be presented. There are very recent studies in the literature to account for input uncertainty with correlated input variables. However, their common input modeling technique assumes continuous marginal distributions for each input. In this study, we will use an alternative modeling methodology called the normal copula graphical model, which allows us to learn about the dependence between input variables which can be binary, categorical with ordered categories, count or continuous. In particular, we first encode the conditional independencies between input variables in an undirected graph. We then build a Markov chain Monte Carlo algorithm to obtain the posterior distribution of the precision matrix associated with a set of multivariate-normal latent variables that underlie the original multivariate input model. We use this posterior distribution to account for the input uncertainty due to the unknown dependence structure. Subsequently, we capture the input uncertainty due to the unknown marginal distribution of each input variable, and present a metamodel-assisted framework to propagate the input uncertainty to the output uncertainty. Finally, we consider a continuous-review assemble-to-order system with a correlated demand-arrival process, and show that our model improves the coverage of the confidence intervals for the per-period expected cost. The seminar will conclude with an outlook on future research directions in this area.
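The basic phenomenon, estimation error in a fitted input model propagating to the simulation output, can be illustrated with a simple bootstrap on a single independent input; the seminar's copula-based method goes much further by handling dependent and mixed-type inputs.

```python
import numpy as np

rng = np.random.default_rng(2)

# Bootstrap sketch of input uncertainty for a toy queueing simulation. The
# service-time data, arrival rate, and queue model are all illustrative
# assumptions; only the propagation idea carries over to the talk.
real_sample = rng.exponential(2.0, size=50)      # stands in for field data

def mean_wait(service_mean, n=2000):
    """Mean waiting time in a single-server FIFO queue via Lindley's recursion."""
    arrivals = rng.exponential(1.0 / 0.3, size=n)   # known arrival process
    services = rng.exponential(service_mean, size=n)
    w, total = 0.0, 0.0
    for a, svc in zip(arrivals, services):
        w = max(w + svc - a, 0.0)
        total += w
    return total / n

outputs = []
for _ in range(100):                             # bootstrap the input model
    resample = rng.choice(real_sample, size=real_sample.size, replace=True)
    outputs.append(mean_wait(resample.mean()))   # refit, then re-simulate

print(f"output std due to input-model uncertainty: {np.std(outputs):.3f}")
```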
The subject of this talk is macro- and micro-scale machining modeling of some
difficult-to-cut materials. Our group’s research focus is the modeling of material and
manufacturing process interactions, and we aim to develop physics-based machining
models and use them to improve the manufacturability of such materials. This talk
will present our research on macro-scale machining of carbon fiber reinforced
polymers, which have become an important material in the aerospace industry, and
micro-scale machining of titanium alloys, which are commonly used in biomedical
parts. In addition, some future research directions will be discussed.
Yiğit Karpat received his B.Sc from Dokuz Eylül University, and his M.Sc from Middle East Technical University, both in Mechanical Engineering. In 2007, he received his Ph.D in Industrial/Manufacturing Engineering from Rutgers University and joined Bilkent University as an Assistant Professor. He currently holds dual appointments in the Department of Industrial Engineering and the Department of Mechanical Engineering at Bilkent, where he pursues research on modeling material/process interactions in manufacturing and is actively involved in the Micro System Design and Manufacturing Center established at Bilkent University with support from the Ministry of Development.
In a telecommunications network the reach of an optical signal is the maximum
distance it can traverse before its quality degrades. Regenerators are devices to extend
the optical reach. The regenerator placement problem seeks to place the minimum
number of regenerators in an optical network so as to facilitate the communication of
a signal between any node pair.
In this talk, the regenerator location problem is revisited from the hub location perspective and two new facets of it involving the challenges of survivability are introduced. Under partial survivability, our designs hedge against failures in the regeneration equipment only whereas under full survivability failures on any of the network nodes are accounted for by the utilization of extra regeneration equipment. All three variations of the problem are discussed in a unifying framework involving the introduction of individual flow-based compact formulations as well as cut formulations and the implementation of branch and cut algorithms based on the cut formulations. We report computational results evaluating the performances of the proposed solution methodologies and propose possible future research extensions in flexible optical networks.
(Joint work with B. Yıldız)
Surgery represents a large portion of a typical hospital’s total expenses and revenues.
Therefore, operating room (OR) scheduling is an important operational problem for health
care delivery systems. These problems are computationally difficult to solve due to not only
their combinatorial structure and the existence of multiple resources such as ORs and
surgeons, but also the uncertainty in surgery durations.
In this study, we first consider a stochastic multi-OR scheduling problem with multiple surgeons where the daily scheduling decisions are made before the resolution of uncertainty. We formulate the problem as a two-stage stochastic mixed-integer program that minimizes the sum of the fixed cost of opening ORs and the expected overtime and surgeon idling cost. We describe some structural properties of our model and derive a set of widely applicable valid inequalities that make it possible to solve realistic-sized instances. We utilize stage-wise decomposition methods to solve our model, and we use our numerical results to quantify the value of capturing uncertainty and the benefit of pooling ORs, and to demonstrate the impact of parallel surgery processing on surgery schedules.
We then consider a stochastic multi-OR scheduling problem where the initial schedule is revised under perfect information at a prespecified rescheduling point during the surgical day. We use stage-wise and scenario-wise decomposition methods to solve our rescheduling model. By using our results, we estimate the value of rescheduling, and illustrate the impact of different surgery sequencing rules on this value.
This research is joint work with Brian Denton (Department of Industrial and Operations Engineering, University of Michigan), Todd Huschka (Department of Health Care Policy and Research, Mayo Clinic), and Andrew Schaefer (Department of Industrial Engineering, University of Pittsburgh).
Bio: Sakine Batun is an assistant professor in the Department of Industrial Engineering at METU. She received her B.S. and M.S. degrees in Industrial Engineering from METU, and her Ph.D. degree in Industrial Engineering from the University of Pittsburgh. Prior to joining the faculty at METU, she was a postdoctoral fellow at the Richard Ivey School of Business, University of Western Ontario. Her primary research interests are in decision making under uncertainty, with applications in healthcare delivery, medical decision making, production and maintenance scheduling.
Wireless mesh networks (WMNs) are communication networks that provide last mile Internet
access. They consist of radio nodes organized in a mesh topology that communicate via
wireless media in a multihop fashion. A few of these nodes (we call them gateways) have
wired connections to the Internet, and provide access for the whole network. Wireless
interference caused by simultaneous transmissions over the same frequency channel
complicates the design problems of WMNs. In order to avoid wireless interference, link
capacities are divided in time or frequency into transmission slots, and these should be
allocated to non-interfering transmissions. In this presentation, we consider the problem of
designing and operating a WMN, whose node demands increase at the same rate over time.
Our goal is to determine a gateway deployment plan that meets the demand at all times and
minimizes the total present value of the investment costs. We present an integer programming
formulation that jointly determines the locations of the gateway nodes, their deployment
times, the routing trees rooted at these gateways, and the transmission slot assignments to
wireless links. We determine tight upper and lower bounds on the optimal cost of this NP-
hard problem. We introduce valid inequalities and heuristic solution methods, and provide
numerical examples to evaluate their performances.
Kagan Gokbayrak received the B.S. degrees in mathematics and in electrical engineering from Bogazici University, Istanbul, in 1995, the M.S. degree in electrical and computer engineering from the University of Massachusetts, Amherst, in 1997, and the Ph.D. degree in manufacturing engineering from Boston University, Boston, MA, in 2001. From 2001 to 2003, he worked as a network planning engineer at Genuity, Inc., Burlington, MA. In addition to preparing demand forecasts and IP-layer capacity plans of the AS1 backbone, Dr. Gokbayrak played a key role in adapting emerging IP technologies. Since 2003, he has been a faculty member in the Industrial Engineering Department, Bilkent University, Ankara, Turkey. During his sabbatical leave in 2013, he was a visiting scholar at Massachusetts Institute of Technology and an adjunct professor at Boston University. His research interests include the optimization of discrete-event and hybrid systems, wireless mesh networks, and sensor networks, and computer simulation with applications to inventory, healthcare and manufacturing systems.
We characterize the Bayesian Nash equilibrium under two sealed-bid split-award
auction formats for awarding procurement contracts in an independent private values
environment with risk-neutral bidders.
These auction formats are straightforward multi-object extensions of the first-price sealed-bid auction. We find that, although the two auction mechanisms yield the same expected costs to the buyer, other aspects of the two models, including the equilibrium bidding strategies, differ in important ways. We also analyze the predictive accuracy of equilibrium strategies in the lab. Overall, the experiments suggest that risk-neutral Bayes-Nash equilibrium strategies serve as a surprisingly accurate model for human bidding behavior in split-award auctions and that strategic complexity is an unlikely explanation for deviations from the equilibrium strategy.
Kemal Goler is a researcher at the Department of Industrial Engineering at Bilkent. Prior to Bilkent, he worked as a principal research scientist at Hewlett-Packard Laboratories, Palo Alto, California, as an assistant professor of Economics at the University of Houston, Houston, Texas, and as a visiting professor at Rice University, Houston, Texas and at Cleveland State University, Cleveland, Ohio. His research interests are in applied mechanism design - game theory, econometrics, and behavioral economics - with applications in economics and OR/OM. He holds a B.A. degree from Bogazici, an M.Sc. degree from Baylor University, both in Economics, and a Ph.D. degree in Social Science from Caltech.
There are various real life problems where equity concerns naturally arise and it is important
to address these concerns for the proposed solutions to be acceptable and implementable. In
this seminar, we discuss methods to incorporate two equity-related concerns into the decision
making process and seek equitable solutions to different operational research problems.
In the first part, we discuss a multi-criteria decision making approach to handle distributional equity concerns. We consider situations where we have person anonymity and quasi-concavity (convexity) conditions on the model of the decision maker’s (DM) preferences. For such situations, we provide an interactive solution process which takes the DM’s value judgments into account. We introduce substantial new theory and design an interactive ranking algorithm based on our theoretical findings.
In the second part, we discuss another equity-related concept: balance concerns in resource allocation decisions. These arise when the DM wants to achieve a desired balance in the allocation over different categories of entities. Unlike distributional equity concerns, balance concerns do not involve the anonymity assumption. We provide a bi-criteria framework to think about trading balance off against efficiency in resource allocation problems and propose optimization and heuristic algorithms for our bi-criteria allocation models. We illustrate our approach by applying it to a portfolio selection problem faced by a funding agency and to randomly generated large-sized problem instances to demonstrate computational feasibility.
Parts of this research are joint work with Prof. Alec Morton from the University of Strathclyde and Nikos Argyris from the University of Warwick.
Özlem Karsu is a PhD candidate at the Operational Research (Management Science) Group at the London School of Economics. She received her B.S. and M.S. degrees from the Industrial Engineering Department of the Middle East Technical University, where she worked as a teaching assistant between 2008 and 2010. She also received an MPhil degree in Operational Research from the London School of Economics in 2011. Her research interests include inequity-averse decisions in operational research, interactive multi-criteria decision making approaches and applications of assignment problems.
Augmented Lagrangian duality provides zero duality gap and saddle point properties
for nonconvex optimization. On the basis of this duality, subgradient-like methods can
be applied to the (convex) dual of the original problem. These methods usually
recover the optimal value of the problem, but may fail to provide a primal solution.
We prove that the recovery of a primal solution by such methods can be characterized
in terms of (i) the differentiability properties of the dual function, and
(ii) the exact penalty properties of the primal-dual pair. We also connect the property
of finite termination with exact penalty properties of the dual pair. In order to
establish these facts, we associate the primal-dual pair to a penalty map. This map,
which we introduce here, is a convex and globally Lipschitz function, and its epigraph
encapsulates information on both primal and dual solution sets.
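For reference, one common instance of this framework (the authors' results allow more general augmenting functions) attaches to $\min\{f(x) : g(x)=0\}$ the augmented Lagrangian

$$
L_c(x,\lambda) \;=\; f(x) + \langle \lambda,\, g(x)\rangle + \frac{c}{2}\,\lVert g(x)\rVert^2, \qquad q(\lambda,c) \;=\; \inf_x L_c(x,\lambda),
$$

and the dual problem $\sup_{\lambda,\,c>0} q(\lambda,c)$ attains a zero duality gap under mild assumptions even for nonconvex $f$ and $g$; the penalty map discussed in the talk encodes how primal solutions can be recovered from such dual schemes.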
Regina Burachik is an Associate Professor at University of South Australia, and her current research interests are nonsmooth, convex, and nonconvex optimization. She has also extensive work in the development of algorithms for nonconvex minimization. Her PhD thesis introduced and analyzed solution methods for variational inequalities, the latter being a generalization of the convex constrained optimization problem. She recently co-authored the Springer book "Set-Valued Analysis and Monotone Mappings". Regina is an associate editor of several journals from the area of optimization, including SIAM Journal on Optimization, Applied Mathematics and Computation, Communications in Mathematical Analysis, TOP, and the Australian and New Zealand Journal for Industrial Mathematics. Regina enjoys learning, teaching, supervising and collaborating with others. Her main and only reason for doing math is pleasure.
Environmental responsibility has become an important ingredient of doing business.
The increased awareness of customers about the environment and governmental
regulations are pushing supply chain entities to minimize any negative influence of
their operations on the environment. Therefore, companies in today’s world have to
take joint responsibility with their suppliers for the environmental impact of their
actions. In this thesis, we study coordination and contracting in a two-echelon supply
chain (i.e., a buyer and a vendor) under three emission regulations: cap, tax and cap-
and-trade. We consider a setting with deterministic demand for a single product in the
infinite horizon. We model and solve the replenishment problems of the buyer and the
vendor assuming decentralized and centralized decision making approaches. Based on
analytical and numerical comparisons of the optimal decisions under the two
approaches, we conclude that coordination may not always be good for the
environment. We also propose some novel contractual agreements that involve carbon
credit sharing as a term, besides the traditional mechanisms such as quantity
discounts. We finally provide an extension and a preliminary analysis for the case of
Pairs Trading is a modest but persistent statistical arbitrage strategy. It is based on
identifying a pair of stocks whose prices are driven by the same economic forces, and
trade according to the spread between their prices. In this thesis, we study a pairs
trading method which maximizes the profit per unit time. We identify the pairs using
cointegration analysis and build the method using Markov chains. After constructing
the method, we examine its performance on both simulated and real data. We use
banking stocks from Istanbul Stock Exchange as real data.
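A minimal sketch of the cointegration-then-trade idea is below; the simulated prices, the OLS hedge ratio, and the z-score thresholds are illustrative assumptions, whereas the thesis derives its trading rule from a Markov chain model of the spread.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(3)

# Simulated cointegrated pair: both prices share one stochastic trend.
n = 1000
trend = np.cumsum(rng.normal(0.0, 1.0, n))
x = 50.0 + trend + rng.normal(0.0, 1.0, n)
y = 30.0 + 0.8 * trend + rng.normal(0.0, 1.0, n)

_, pvalue, _ = coint(x, y)                   # Engle-Granger cointegration test
print(f"cointegration p-value: {pvalue:.3f}")

beta = np.polyfit(x, y, 1)[0]                # OLS hedge ratio
spread = y - beta * x
z = (spread - spread.mean()) / spread.std()  # in-sample z-score (sketch only)

position = np.zeros(n)                       # +1 long spread, -1 short spread
for t in range(1, n):
    if z[t] > 2.0:
        position[t] = -1.0                   # spread too wide: short it
    elif z[t] < -2.0:
        position[t] = 1.0                    # spread too narrow: long it
    elif abs(z[t]) < 0.5:
        position[t] = 0.0                    # spread reverted: close out
    else:
        position[t] = position[t - 1]

pnl = float((position[:-1] * np.diff(spread)).sum())
print(f"in-sample spread P&L: {pnl:.2f}")
```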
Fabrication of micro tools made from tungsten carbide and polycrystalline diamond is
a difficult and time-consuming process. The quality of the tool directly affects the
dimensional integrity of the fabricated micro products. In this thesis, fabrication of
micro end mills using the wire electro discharge grinding (WEDG) process, a variation of
the electro discharge machining process, is considered. The advantage of this process is
that very small micro tools (less than 0.1 mm diameter) can be produced by eroding
the tool material through electrical discharges. It is preferred over the traditional grinding
process since no forces are transmitted to the tool body during fabrication. However,
it takes a very long time to fabricate micro tools with this method. Therefore, it is
important to understand the influence of process parameters on material erosion rate
in order to be able to model the process. In this study, the relationship between
process input parameters and process outputs (material erosion rate and surface
roughness) are investigated using experiments. A parametric formulation which
allows the estimation of tool fabrication time as a function of WEDG process
parameters and given tool geometry has been developed. The developed model can be
used in tool geometry design optimization studies.
Keywords: Wire Electro Discharge Grinding, Micro Tool Fabrication, Experimental Modeling, Process Modeling.
In this talk, we present our work in modifying classical models to address modern
computational trade-offs and behavioral observations. Most of this talk focuses on
time-related issues: instead of following the conventional compute-first-implement-
later approach, we develop methods to embed computation into implementation in
time-sensitive applications of well-known models such as TSP and VRP. We
demonstrate the effectiveness of our approach and the break-even analysis for its use.
We also present our tangential work on a tour length estimation model for random
graphs that compares favorably with the seminal result by Beardwood et al. (1959).
Finally, we discuss the motivation for our research in updating classical models based
on the results of recent behavioral studies.
This is joint work with Prof. Joel Sokol.
Bio: Bahar Çavdar received her B.S. degree in Industrial Engineering with a minor in Computer Engineering from Middle East Technical University in 2009. She is currently a Ph.D. candidate at Georgia Institute of Technology. Her current research interests are in balancing computation and implementation time when solving real-time problems in time-sensitive applications and integrating human behavior into standard models.
One of the devastating consequences of natural and man-made disasters is the resulting debris, which
can be catastrophic for the economy, environment, and the health of affected communities. Despite its
importance, there is little existing work on quantitative decision support for debris management. In this
talk, we discuss the post-disaster debris clearance problem, in which the objective is to establish
connectivity of supply and demand for relief commodities in a timely manner by determining a
sequence for the clearance of debris-blocked roads under limited resource capacity.
In many real-life applications, accurate estimation of debris amounts is not possible due to lack of information or estimation methods. As clearance proceeds, the estimations are updated based on actual observations of the debris amounts, resulting in a trade-off between learning the network structure and greedily aiming to satisfy the demand. To solve this problem optimally, we develop a partially observable Markov decision process model. Due to the intensive computational requirements of this approach, we also propose novel heuristics that enhance the efficiency of the search tree. Through computational experiments, we compare the proposed approaches to those that ignore the uncertainty in the debris amounts as well as those that mimic current practice. This research is joint work with Özlem Ergun and Pınar Keskinocak from Georgia Institute of Technology.
Biographical Sketch
Melih Çelik is a Ph.D. candidate in the H. Milton Stewart School of Industrial and Systems Engineering and works at the Center for Health and Humanitarian Logistics at Georgia Institute of Technology. He received his B.S. and M.S. degrees in Industrial Engineering from Middle East Technical University in 2007 and 2009, respectively. He also holds an M.S. degree in Operations Research from Georgia Institute of Technology, which he received in 2012. His research interests include stochastic and online network design, humanitarian applications of operations research, warehouse logistics, and facility location and layout.
Mechanism design is a branch of economics and game theory where a principal
wishes to design a mechanism or a game that will elicit privately held information by
participants and achieve at the same time the goals of the principal. The Nobel Prize
in Economics 2007 was awarded to Hurwicz, Maskin and Myerson for their work in this field.
The connections of the field to optimization are ubiquitous, and Myerson's seminal paper was published in Math. of OR. In this talk I intend to make a very elementary introduction to mechanism design following the recently published book "Mechanism Design: A Linear Programming Approach" by R. Vohra (Cambridge University Press, 2011). Key concepts will be introduced using an example from the theory of screening, the simplest sub-field of mechanism design where e.g., an agent sells an item to a buyer. While the item is of no value to the seller, it represents a value to the buyer, which is, however, unknown to the seller.
The seller wishes to design a mechanism that will motivate any potential buyer to reveal his valuation truthfully and at the same time maximize earnings from the sale. Under some suitable assumptions, the problem of mechanism design in this particular context is posed as a linear optimization problem. By connecting the problem to the theory of shortest paths and duality of linear programming, the optimal mechanism is derived.
Extensions and research problems are discussed.
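As a concrete instance of the screening problem sketched above, take finitely many buyer types $\theta_1 < \dots < \theta_n$ with probabilities $f_i$, an allocation $a_i \in [0,1]$ and a payment $p_i$ for each reported type. The seller's problem is then the linear program

$$
\max_{a,\,p} \;\sum_i f_i\, p_i \qquad \text{s.t.} \qquad \theta_i a_i - p_i \;\ge\; \theta_i a_j - p_j \;\;\forall\, i,j, \qquad \theta_i a_i - p_i \;\ge\; 0 \;\;\forall\, i,
$$

where the first (incentive-compatibility) constraints induce a network structure that connects the problem to shortest paths, and LP duality then yields the optimal mechanism.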
Mustafa Rasim Kılınç -
The focus of this work is a computationally effective method for generating disjunctive cuts for convex MINLPs. The method relies on solving a sequence of cut-generating linear programs (CGLP). If the constraint functions have a special separability structure, then the polyhedral outer approximation of the feasible region can be significantly improved by introducing auxiliary variables that separately approximate the elements of the constraint functions. These improved approximations can have significant positive benefits for linearization-based algorithms as well as the performance of the disjunctive cut generation method. The introduction of auxiliary variables can also improve the strength of the generated cuts. For instance, we have recently shown that, while the conic MIR proposed by Atamturk and Narayanan only uses an LP approximation to generate cuts for conic integer programming, the use of an extended formulation can allow the conic MIR to outperform cuts that use the full nonlinear relaxation. Our computational experiments compare the strength of this and other extended formulations for cutting plane generation and explore the strength of extensions of the conic MIR that use additional nonlinear information. This talk is based on joint work with S. Moderasi, J. Linderoth, J. Luedtke and J.P. Vielma.
The p-center problem is a relatively well known facility location problem that involves locating p
identical facilities on a network to minimize the maximum distance between demand nodes and
their closest facilities. The focus of the problem is on the minimization of the worst case service
time. This sort of objective is more meaningful than total cost objectives for problems with a time-
sensitive service structure. A majority of applications arise in emergency service location, such
as determining optimal locations of ambulances, fire stations and police stations, where human
life is at stake. There is also an increased interest in p-center location and related location
covering problems in the contexts of terror fighting, natural disasters and human-caused disasters.
The p-center problem is NP-hard even if the network is planar with unit vertex weights, unit edge
lengths and with the maximum vertex degree of 3. If the locations of the facilities are restricted to
the vertices of the network, the problem is called the vertex restricted p-center problem; if the
facilities can be placed anywhere on the network, it is called the absolute p-center problem. The
p-center problem with capacity restrictions on the facilities is referred to as the capacitated p-
center problem and in this problem, the demand nodes can be assigned to facilities with single or
multiple allocation strategies. In this thesis, the capacitated $p$-center problem under the multiple
allocation strategy is studied for the first time in the literature.
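For reference, a textbook formulation of the uncapacitated vertex restricted p-center problem is given below; this is the standard starting point, not the enhanced formulations developed in the thesis. With $y_j = 1$ if a facility is opened at vertex $j$, $x_{ij} = 1$ if demand node $i$ is assigned to facility $j$, and $d_{ij}$ the shortest-path distance,
\begin{align*}
\min\quad & z\\
\text{s.t.}\quad & \textstyle\sum_j y_j \le p, \qquad \textstyle\sum_j x_{ij} = 1 \quad \forall i,\\
& x_{ij} \le y_j \quad \forall i, j, \qquad \textstyle\sum_j d_{ij}\, x_{ij} \le z \quad \forall i,\\
& x_{ij},\, y_j \in \{0, 1\}.
\end{align*}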
The main focus of this thesis is a modelling and algorithmic perspective on the exact solution of the absolute, vertex restricted and capacitated p-center problems. The existing literature is enhanced by the development of mathematical formulations that can solve problems of typical dimensions through the use of off-the-shelf commercial solvers. By exploiting the structural properties of the proposed formulations, exact algorithms are developed. In order to increase the efficiency of the proposed formulations and algorithms in solving higher dimensional problems, new lower and upper bounds are provided and utilized during the experimental studies. The dimensions of the problems solved in this thesis are the highest reported in the literature.
We study patient admission policies in acute care wards recognizing multiple patient
types with different medical characteristics. Recent studies have shown that patients
who need acute care are more effectively treated in specialized inpatient settings. The
benefits of such specialized care, however, might be offset by long boarding times at
the emergency department due to bed unavailability at the ward. This research is
inspired by the managerial challenges at the neurology ward of Montreal Neurological
Hospital (MNH), where the optimal care pathway for patients with neurological
diseases is particularly time-sensitive.
We model the patient admission and bed allocation problem using an average cost dynamic programming framework. By solving the dynamic program, we determine the dynamic admission policy that provides the best care for all patients in light of limited bed availability. We present an integrated solution approach, which combines queuing methods and approximate dynamic programming. We compare the performance of the proposed policy with that of other heuristic admission policies. Our approach also has some implications for similar general multi-class queuing problems. This is joint work with Saied Samiedaluie and Vedat Verter of McGill University, and Dan Zhang of the University of Colorado at Boulder.
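For background, the average cost criterion used here leads, in its standard generic form (independent of the specific ward model), to a Bellman equation of the type
\[
g + h(s) \;=\; \min_{a \in A(s)} \Big\{ c(s, a) + \sum_{s'} p(s' \mid s, a)\, h(s') \Big\},
\]
where $g$ is the optimal long-run average cost and $h$ is a relative value function; approximate dynamic programming replaces $h$ by a tractable approximation when the state space is too large to solve this equation exactly.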
In this study, we examine the coordinated replenishment and transportation functions
of a supply chain with N retailers and a warehouse in the presence of stochastic
demand characterized by a Poisson process. We consider two alternative
transportation practices: (i) an in-house fleet with capacitated trucks and (ii)
contracts with a 3PL provider. We also consider two alternative replenishment
policies, namely the (Q,S) and (R,T) joint replenishment policies, among the retailers.
For the models mentioned above we derive the operating characteristics and the
expected cost rates using the stochastic properties of the underlying processes. The
objective is to minimize the expected cost rates with respect to the policy parameters.
Pros and cons of alternative replenishment policies and transportation options are
discussed via numerical examples.
Finding optimal times to sell a house, to retire, or to hire or fire employees requires
solving optimal stopping problems. In this talk, we will show that optimal stopping
problems of linear diffusions can be directly solved by means of generalized
concave functions, and illustrate the solution method on several numerical examples.
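To fix ideas, here is the standard concave-majorant characterization for linear diffusions, stated in generic notation (a sketch of the framework, not the talk's specific examples). For $V(x) = \sup_\tau \mathbb{E}_x\!\left[ e^{-r\tau} h(X_\tau) \right]$, let $\psi$ and $\varphi$ denote the increasing and decreasing solutions of $\mathcal{A}u = r u$, where $\mathcal{A}$ is the generator of $X$, and set $F = \psi / \varphi$. Then
\[
V(x) \;=\; \varphi(x)\, W\big(F(x)\big),
\]
where $W$ is the smallest nonnegative concave majorant of $(h / \varphi) \circ F^{-1}$, and it is optimal to stop exactly where $W$ coincides with the transformed reward.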
According to World Food Programme (WFP) and UNICEF, “Humanitarian logistics
is the process of planning, implementing, controlling the efficient, cost effective flow
and storage of goods and materials, as well as related information, from point of
origin to point of consumption for the purpose of meeting the end beneficiary’s
requirements.” In addition to man-made and natural disasters, there are also long-term
issues due to the lack of basic necessities, including shelter, water, food, and access to
healthcare, for which humanitarian action is required. Given the complexity of the
problems faced, there is a tremendous need and potential for academic research in this
area. In this tutorial we will first provide an overview of some of the current research
areas in humanitarian logistics and continue with some research examples from
different categories of the humanitarian logistics applications including mobile blood
collection, shelter area location, and debris management.
We examine how a firm can induce higher environmental quality from a supplier. A
high environmental quality of the product stimulates the final demand whereas it
requires a lump-sum investment and results in a unit cost increase for the supplier.
Under such a setting, we examine three dynamics that the firm can leverage to entice
the supplier to invest: revenue sharing, cost sharing, and supplier competition. We
find that supplier competition is the most effective of the three, and cost sharing
produces higher investment levels when the firm is willing to take on bigger shares of
the investment cost. Both may surpass the centralized solution investment level under
the right circumstances. Our research is based on our work with an NGO as it markets
to industries a tool to safely share chemical information.
Bio: Özgen Karaer is an assistant professor in the Department of Industrial Engineering at METU. She received her B.S. degree (2002) from the Department of Industrial Engineering at METU, and her M.S. and Ph.D. (2008) degrees from the Department of Management Science and Engineering at Stanford University. Prior to joining METU, she worked as a senior consultant and subsequently a manager in the Operating Strategy department at Gap Inc. headquarters in San Francisco for about four years. During her tenure at Gap Inc., she worked on various projects involving replenishment prioritization, promotion hindsighting, demand planning, store inventory allocation, merchandise-test store selection, store performance measurement, and RFID. Her research is in strategic interactions in supply chains and retail management.
This research is joint work with Tim Kraft (University of Virginia) and John Khawam (Google Inc).
Optimization of signal structures and detectors is studied for communications systems
under power constraints in the presence of additive noise. First, it is observed that the
optimal signal for each symbol can be characterized by a discrete random variable
with at most two point masses. Then, optimization over all possible signal
distributions and corresponding maximum a posteriori probability (MAP) decision
rules are considered. It is shown that the optimization problem can be simplified into
an optimization over a number of signal parameters instead of functions, which can
be solved via global optimization techniques. Improvements that can be obtained via
the joint design of the signaling and the detector are illustrated via examples. Finally,
related problems such as optimal detector randomization and channel switching are discussed.
(Joint work with Hui Lin)
Influenza (flu) is a highly contagious, acute, respiratory viral disease. Seasonal epidemics annually impact 5-15% of the world's population, resulting in 3-5 million cases of severe illness and up to 500,000 deaths. Seasonal flu epidemics caused by antigenic drifts and the high rate of influenza transmission require annual updates in the flu shot composition. The World Health Organization recommends which strains of influenza to include in each year’s vaccine based on surveillance data and epidemiological analysis.
There are two critical decisions regarding the flu shot. The first is its composition; currently three strains comprise the flu shot, and this decision affects its effectiveness against the upcoming flu epidemic. The second decision is the timing of the composition decision. If the strains are selected too early, there is a greater likelihood of a mismatch between the vaccine and the circulating strains. However, delaying the decision may leave too short a time for manufacturing, resulting in delays or shortages in the flu shot supply.
Optimization models have not yet made a large impact on public health. Thus far, the design and timing of the flu shot have been decided in an ad hoc manner. We propose a multi-stage stochastic mixed-integer program to address the optimal design and timing of the annual flu shot and calibrate our model with real-life data. Risk is taken into account by incorporating risk measures in the objective function. Multi-stage stochastic mixed-integer programs are very difficult optimization problems, but have many potential applications. The techniques we have developed are likely to lead to algorithmic enhancements that can be broadly applied.
This is joint work with Osman Ozaltin of the University of Waterloo, and Oleg Prokopyev and Mark Roberts of the University of Pittsburgh.
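Schematically, a multi-stage stochastic mixed-integer program of the kind described above can be written on a scenario tree $\mathcal{T}$ with node probabilities $p_n$ and ancestor map $a(n)$; this is a generic template in our notation, not the authors' exact flu-shot model:
\[
\min \sum_{n \in \mathcal{T}} p_n\, c_n^\top x_n
\quad \text{s.t.} \quad A_n x_{a(n)} + B_n x_n \ge b_n, \quad x_n \in \mathbb{Z}^{q}_{+} \times \mathbb{R}^{q'}_{+} \quad \forall n \in \mathcal{T},
\]
where attaching a decision $x_n$ to each tree node enforces nonanticipativity by construction; risk aversion is incorporated by replacing the expectation implicit in the objective with a risk measure over the scenario outcomes.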
Although the airline industry is very successful in implementing optimization tools for the planning and scheduling of its resources, it faces significant problems in dealing with disruptions in daily operations caused by unexpected aircraft maintenance requirements or undesirable weather conditions. In this presentation, I first propose a novel rescheduling optimization model that explicitly accounts for delay as well as fuel burn and CO2 emission costs by incorporating cruise speed control into airline disruption management. Flight time controllability and nonlinear delay, fuel burn and CO2 emission cost functions complicate the aircraft recovery problem significantly. We utilize recent advances in conic mixed integer programming and propose a strengthened formulation to mitigate this difficulty, so that our nonlinear optimization model can be used by operations controllers to deal with disruptions in real time in an optimal manner instead of relying on heuristic approaches.
Robust airline scheduling, on the other hand, can be viewed as generating flight schedules that are likely to minimize passenger delay. Airlines usually add extra time, known as schedule padding, to scheduled gate-to-gate flight times to make their schedules less susceptible to variability and disruptions. There is a critical tradeoff between any kind of buffer time and daily aircraft productivity. The uncertainty in flight times is modeled through chance constraints on passenger connection service levels, which are expressed using second order conic inequalities. Aircraft speed control is a practical alternative to inserting idle times into schedules. Overall, a 60% saving in idle time costs can be obtained in exchange for a 2% increase in fuel costs. Our computational study shows that exact solutions can be obtained by commercial solvers such as CPLEX in seconds for a single hub schedule and in minutes for a 4-hub daily schedule of a major U.S. carrier.
This is a joint work with Prof. Alper Atamturk from UC Berkeley, Prof. Sinan Gurel from METU, and A. Serasu Duran from Northwestern University.
M. Selim Akturk is Professor of Industrial Engineering at Bilkent University, Turkey. During his sabbatical leaves, he worked as a visiting professor at McGill University, Carnegie Mellon University and the University of California at Berkeley. His current research interests include production management systems, aviation applications, and scheduling. The first part of this presentation received the ``Best Technical Paper'' award at the 51st Annual Symposium of the Airline Group of the International Federation of Operational Research Societies (AGIFORS) in 2011.
We study the problem of project selection and resource allocation in a multi-stage new product development (NPD) process with stage-dependent resource constraints. We model the problem as an infinite-horizon Markov Decision Process under the discounted cost criterion. Each NPD project undergoes a different experiment at each stage of the NPD process; these experiments generate signals about the true nature of the project. Experimentation times are independent and exponentially distributed. After each experiment, beliefs about the outcome of each project are updated according to a Bayesian rule. A project thus becomes differentiated through its signals, and its category is determined by all its available signals. The state of the system is described by the number of projects in each category. A control policy specifies, given the system state, how to utilize the resources at each stage, i.e., the projects (i) to experiment with at each stage, and (ii) to terminate.
We characterize the optimal control policy according to a new strategy, state-dependent non-congestive promotion (SDNCP), for two different special cases of the general problem: (a) when there are multiple uninformative experiments; or (b) when there is a single informative experiment and projects are not terminated. An SDNCP policy implies that, at each stage, it is optimal to advance the project with the highest expected reward to the next stage if and only if the number of projects in each successor category is less than a state-dependent threshold. We further find that threshold values decrease in a non-strict sense as a later stage becomes more congested or as an earlier stage becomes less congested. These analytical findings uncover the role congestion plays in optimal policies. Our computational results support the outstanding performance of SDNCP as a heuristic for the general NPD problem.
Emre Nadar is currently an Assistant Professor of Industrial Engineering at Bilkent University. He received his Ph.D. in Operations Management from the Tepper School of Business at Carnegie Mellon University in 2012. His research interests include Markov decision processes, queuing theory, supply chain management, and sustainable operations. He has received several awards, including the first place award in the POMS Supply Chain College student paper competition and the Gerald L. Thompson doctoral dissertation award from Carnegie Mellon University. He was a finalist in both the INFORMS George Nicholson student paper competition and the MSOM student paper competition.
Computation is everywhere nowadays. It has changed not only our daily lives but our academic practice as well. It seems hard to believe that all these changes arose from Alan Turing's simple conception of functions, unheard of before: step by step execution of an algorithm over some kind of representation. Turing seems to have begun his scientific games as a child with quite a personal way of doing things, which is very much reflected later in his life in his handling of an apparently eclectic set of problems. Some connections between these problems will be highlighted in the seminar.
Cem Bozsahin is professor of cognitive science at the Informatics Institute, METU. He holds a PhD in computer science from Arizona State University, and received his BS and MS in Industrial Engineering from METU. His research topics are the nature of constraints on cognition and the investigation of natural grammars.
We use Markov risk measures to formulate a risk-averse version of the undiscounted total cost problem for a transient controlled Markov process. We derive risk-averse dynamic programming equations and we show that a randomized policy may be strictly better than deterministic policies, when risk measures are employed. We propose two solution methods, value and policy iteration, and analyze their convergence. We illustrate the results on an optimal stopping problem, an organ transplant problem, and a credit limit control problem.
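In schematic form (the generic risk-averse dynamic programming equation, in our notation), the expectation in the classical total cost equation is replaced by a one-step conditional risk measure $\rho$:
\[
v(x) \;=\; \min_{u \in U(x)} \Big\{ c(x, u) + \rho\big( v(X_1) \,\big|\, x, u \big) \Big\},
\]
where $X_1$ is the next state under control $u$; composing such one-step measures over time yields the Markov risk measures used in the talk, and the value and policy iteration methods operate on this equation in place of its risk-neutral counterpart.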
Ozlem Cavus is currently an Assistant Professor of Industrial Engineering at Bilkent University. She received her B.S. and M.S. degrees in Industrial Engineering from Bogazici University in 2004 and 2007, respectively, and the Ph.D. degree in Operations Research from Rutgers Center for Operations Research (RUTCOR) at Rutgers University in 2012. Her research interests include risk-averse dynamic optimization, Markov decision processes, dynamic programming, and stochastic optimization under dominance constraints.
Breast cancer is the most common non-skin cancer and the second leading cause of cancer death in US women. Although mammography is the most effective modality for breast cancer diagnosis, it has several potential risks, including high false-positive rates. Therefore, the balance of benefits and risks, which depend on personal characteristics, is critical in designing a mammography screening schedule. In contrast to prior research and existing guidelines, which consider population-based screening recommendations, we propose a personalized mammography screening policy based on the prior screening history and personal risk characteristics of women.
We formulate a finite-horizon partially observable Markov decision process (POMDP) model for this problem. Our POMDP model incorporates two methods of detection (self or screen), age-specific unobservable disease progression, and age-specific mammography test characteristics. We use a validated micro-simulation model based on real data to estimate the parameters, and solve this POMDP model optimally for individual patients. Our results show that the proposed personalized screening schedules outperform the existing guidelines with respect to total expected quality-adjusted life years, while significantly decreasing the number of mammograms. We further find that the mammography screening threshold risk increases with age. We derive several structural properties of the model, including sufficiency conditions that ensure the existence of a control-limit policy.
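The core of any POMDP model of this kind is the Bayesian belief update; in generic notation (ours, not the paper's), after taking action $a$ (e.g., ordering a mammogram) and receiving observation $o$ (the test result), the belief over the unobservable disease state is revised as
\[
b'(s') \;=\; \frac{O(o \mid s', a) \sum_{s} P(s' \mid s, a)\, b(s)}{\sum_{\sigma} O(o \mid \sigma, a) \sum_{s} P(\sigma \mid s, a)\, b(s)},
\]
with $P$ the (age-specific) disease progression probabilities and $O$ the (age-specific) test characteristics; a control-limit policy then prescribes switching actions once this belief crosses a threshold.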
Oguzhan Alagoz is currently a visiting associate professor at Bilkent University and associate professor of Industrial and Systems Engineering at the University of Wisconsin-Madison. In addition, he is an associate professor in the Department of Population Health Sciences and serves as the director of the National Institutes of Health (NIH)-funded Institute for Clinical and Translational Research Simulation Center at the UW-Madison School of Medicine and Public Health. His research interests include stochastic optimization, medical decision making, completely and partially observable Markov decision processes, simulation, risk-prediction modeling, health technology assessment, and scheduling. He is on the editorial boards of Operations Research, IIE Transactions, Medical Decision Making, and IIE Transactions on Healthcare Systems Engineering. He has received various awards, including a CAREER award from the National Science Foundation (NSF), the Dantzig Dissertation Honorable Mention Award from INFORMS, the second place award in the INFORMS Junior Faculty Interest Group best paper competition, the best paper award from the INFORMS Service Science Section, and the best poster award from the UW Carbone Comprehensive Cancer Center. He has been the principal investigator and co-investigator on grants totaling more than $3 million funded by NSF and the National Cancer Institute of NIH. He is a member of INFORMS, IIE, SMDM, and CISNET.
In this thesis we consider two problems in the newsvendor setting with multiple inputs, under a carbon emission constraint and nonlinear production functions. In the first problem, we assume a strict carbon cap and find the optimal production quantity and input allocation that maximize the expected profit under this constraint. In the second problem, we consider an emission trading scheme in which an advance purchase of carbon emission quota is made at an initial price before the random demand is realized. When the demand is realized and new carbon trade prices are revealed, it is possible to buy additional permits or to sell the excess amount. The aim is to decide on the optimal allocation of the inputs as well as the carbon trading policy so as to maximize the expected profit. In both problems, the production quantity is linked to multiple inputs via the Cobb-Douglas and Leontief production functions. The optimal policy structure is derived and numerical examples are provided.
Keywords: Production, Newsvendor Problem, Inventory Management, Production Function, Multiple Inputs, Carbon Emission, Carbon Cap, Emission Trading, Cobb-Douglas, Leontief.
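As a sketch of the first problem above (in our notation; details of the thesis model may differ), with input vector $x \ge 0$, unit costs $c$, emission factors $e$, cap $C$, and Cobb-Douglas output $q(x) = \prod_i x_i^{\alpha_i}$, the newsvendor solves
\[
\max_{x \ge 0}\ \ \mathbb{E}\big[\, r \min\{q(x), D\} \,\big] - c^\top x
\quad \text{s.t.} \quad e^\top x \le C,
\]
where $D$ is the random demand and $r$ the unit revenue; the second problem replaces the hard cap by a budget of tradable permits purchased at an initial price and traded again after the demand is realized.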
A car passes through press, body-shell, painting and assembly stages during its manufacturing process. Due to increased competition, car manufacturers aim to continuously improve their processes. In this study, we analyze the planning operations for the production of front/back and left/right doors in the body-shell department of the Bursa Oyak-Renault factory and propose a decision support system to improve the planning process. We develop four mathematical models and two heuristic approaches that reduce the company's current costs, particularly inventory carrying and setup costs. In the body-shell department of the company, two parallel manufacturing cells produce doors to be assembled on the consumption line. Effective planning and scheduling of the jobs on these lines requires solving an integrated machine-scheduling and inventory-planning problem subject to inclusive eligibility constraints and sequence-independent setup times with job availability in the flexible manufacturing cells of the body-shell department. The novelty of the models lies in integrating inventory planning and production scheduling decisions with the aim of streamlining the operations of the door manufacturing cells with the consumption line. The first proposed heuristic is a Rolling Horizon Algorithm (RHA), which divides the planning horizon into sub-intervals and solves the problem by rolling the sub-interval solutions forward. The second is a Two-Pass Algorithm, which also divides the planning horizon into sub-intervals but, unlike the RHA, updates the job list by adding new jobs or deleting existing ones. These approaches are implemented with the Gurobi optimization software and the Java programming language within a decision support system that supports daily planning activities.
Returns of sold items by customers have become tremendously common in many industries. The increase in the volume of returned items makes return information a critical factor for inventory control. Another critical parameter for an inventory system is, undoubtedly, the length of the review period. The effect of the review period, or the length of the time bucket, is amplified by returned items, because the return information available at a decision point depends on the review frequency. In this study, we analyze the effects of these two parameters in a multi-period inventory system where the length of the time horizon is fixed. A dynamic programming approach is used to calculate the optimal inventory positions, under the assumption that a fixed proportion of sold items is returned. Computational results compare the effects of return information under different return proportions and period lengths, and these results are used in various analyses exploring the advantage gained by using return information.
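A stylized version of the recursion (in our notation; return lags and other details of the thesis model are suppressed) is
\[
V_t(x) \;=\; \min_{y \ge x} \Big\{ c\,(y - x) + \mathbb{E}\big[\, L(y, D_t) + V_{t+1}\big(y - D_t + \rho \min\{y, D_t\}\big) \big] \Big\},
\]
where $x$ is the inventory position at review epoch $t$, $L$ the one-period holding and shortage cost, $D_t$ the period demand, and $\rho$ the fixed return proportion; longer review periods aggregate more demand and return information into each decision, which is the tradeoff the study quantifies.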
Wireless Mesh Networks (WMNs) are wireless communication networks that consist of a finite number of radio nodes, in which a predetermined subset of nodes, called gateways, have wired connections to the Internet and the remaining ones transmit their traffic to a gateway node over the wireless medium in a multi-hop fashion via the other nodes.
Wireless communication signals that propagate simultaneously within the same frequency band may interfere with one another at a receiving node and may therefore prevent successful transmission of data. In order to circumvent this problem, nodes on the network can be configured to receive and send signals in different time slots and through different frequency bands. Therefore, a transmission slot can be defined as a pair of a certain frequency band and a specific time slot. In addition, by adjusting the power level of a radio node, its transmission range can be modified.
Given a wireless mesh network with fixed node locations, demand rate at each node, and maximum power level for each node, we study the problem of carrying the traffic of each node to the Internet through the network. Our goal is to allocate capacities in proportion to the demand of each node in such a way that the minimum ratio is maximized. We propose a mixed integer linear programming (MILP) formulation to select a given number of gateway locations from among the nodes in the network, to determine the routing of the traffic of each node through the gateway nodes, to assign transmission slots to each node in order to ensure no interference among wireless signals, and to determine the transmission power levels. In our study, we adopt the more realistic physical interference model.
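The fairness objective can be written compactly as follows (a schematic fragment of the MILP, in our notation):
\[
\max\ \lambda \quad \text{s.t.} \quad c_v \;\ge\; \lambda\, d_v \quad \forall v \in V,
\]
where $d_v$ is the demand rate of node $v$ and $c_v$ the capacity that the routing, transmission slot assignment, and power decisions jointly allocate to it; the remaining constraints of the formulation define $c_v$ and enforce interference-free transmission under the physical interference model.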
Since the MILP formulation becomes computationally impractical for larger instances, we propose several heuristic methods that can quickly find reasonably good solutions. The proposed heuristics approach the problem in two steps, each of which solves an easier MILP. In the first step, a relaxed version of the proposed MILP is used to make the routing decisions. The second step performs the transmission slot assignment using the results of the first step. We evaluate the performance of our methods on a number of data sets.
Keywords: Wireless Mesh Networks, Integer Programming, Gateway Selection, Routing, Transmission Slot Assignment.
A recent trend in the manufacturing industry is the production of high value added micro parts which are mainly used in biomedical and electronics industries. It is crucial to develop high precision machine tools and operate them with high efficiency through an intelligent selection of process parameters. However, the knowledge base for micro machining operations is quite limited compared to macro scale machining processes and needs to be further developed. This study focuses on a micro milling process that is capable of producing micro parts with three dimensional features made from engineering materials.
In this thesis, modeling and optimization of macro scale turning and micro scale milling operations are considered. A well-known multi-pass turning optimization problem from the literature is used to benchmark the performance of the Particle Swarm Optimization (PSO) technique against nonlinear optimization algorithms. It is shown that acceptable results can be obtained in a short time through PSO. Then, the micro scale milling operation is thoroughly investigated through experimental techniques: the influences of machining parameters on the process outputs (machining forces, surface quality, and tool life) are characterized and the dominant factors are identified. A minimum unit cost optimization problem is formulated based on the pocketing operation, and machining strategies are proposed for different machining scenarios using the PSO approach. It is shown that significant reductions in machining time and unit cost can be obtained through process optimization.
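Since PSO is central to the benchmarking above, the following is a minimal, self-contained particle swarm sketch for generic bound-constrained minimization (standard inertia-weight PSO); the toy objective in cutting speed and feed at the end is purely illustrative and is not the thesis' machining cost model.

    import random

    def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        """Minimize f over the box 'bounds' with a basic particle swarm."""
        dim = len(bounds)
        pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]               # personal best positions
        pbest_val = [f(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best so far
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    # inertia + cognitive pull (personal best) + social pull (global best)
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    lo, hi = bounds[d]
                    pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)  # clip to box
                val = f(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    # Toy surrogate cost in (cutting speed, feed); illustrative only.
    cost = lambda z: (z[0] - 150) ** 2 / 100 + (z[1] - 0.2) ** 2 * 500
    print(pso(cost, [(50, 400), (0.05, 0.5)]))

In practice the objective would encode the machining time, tool life, and setup cost terms of the unit cost model, with infeasible parameter combinations penalized.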
In this thesis, we study a two-stage flow shop problem with a single server. All jobs are available for processing at time zero. Processing of a job is preceded by a sequence-independent setup operation on both machines. The setup and processing times of all jobs on the two machines are given. All setups are performed by the same server, which can perform one setup at a time. Setups cannot be performed simultaneously with job processing on the same machine. Once the setup is completed for a job, processing can progress automatically without any further need for the server. Setup for a job may start on the second machine before that job finishes its processing on the first machine. Preemption of setup or processing operations is not allowed. A job is completed when it finishes processing on the second machine. The objective is to schedule the setup and processing operations on the two machines in such a way that the total completion time is minimized. This problem is known to be strongly NP-hard. We propose a new mixed integer programming formulation for small-sized instances and Variable Neighborhood Search (VNS) mechanisms for larger problems. We also develop several lower bounds to help assess the quality of heuristic solutions on large instances for which optimum solutions are not available. Experimental results indicate that the proposed heuristics provide reasonably effective solutions for a variety of instances and are very efficient in terms of computational requirements.
Keywords: machine scheduling, flow shop, single setup server, total completion time, variable neighborhood search.
In this seminar we will present two studies related to mobile distribution network design. The main objective of the first study is to provide a decision support tool that can help plan ammunition distribution on the battlefield. In the second study, we aim to speed up the solution time by providing a different mathematical formulation and developing a heuristic.
In the first study, we demonstrate through an extensive literature review that the existing models are not capable of handling the specifics of the ammunition distribution. To this end, we propose a novel three-layer commodity-flow location routing formulation that distributes multiple products, respects hard time windows, allows demand points to be supplied by more than one vehicle or depot, and locates facilities at two different layers. We derive several valid inequalities to speed up the solution time of our model, illustrate the performance of the model in several realistically sized scenarios, and report encouraging results. Finally, we introduce a dynamic model that designs the distribution system in consecutive time periods for the entire combat duration.
In the second study, we propose a different formulation approach than the common arc-based product-flow (Arc-BPF) approach in the literature. We associate product amounts with the nodes of the network, resulting in a node-based product-flow (Node-BPF) formulation. Our main objective is to develop LRP models with fewer constraints and variables, which can be solved more efficiently. To introduce the proposed approach, we reformulate the ammunition distribution system (a complex four-index Arc-BPF LRP model) as a three-index Node-BPF model, which computationally outperforms the former. We then introduce a heuristic method.
This is a study on robust airline scheduling where flight block times are considered in two parts: cruise time and non-cruise time. Cruise times are controllable and non-cruise times are random variables. Cruise time controllability is used together with idle time insertion to handle uncertainty and guarantee passenger connection service levels while ensuring minimum costs. The nonlinearity of the cost functions is handled by representing them via second order conic inequalities. The uncertainty in non-cruise times is modeled through chance constraints on passenger connection service levels, which are expressed as second order conic inequalities using closed-form equations. Congestion levels of the origin and destination airports are used to decide the variability for each flight. A computational study shows that exact solutions can be obtained by commercial solvers in seconds for a single hub schedule and in minutes for a 4-hub daily schedule of a US carrier.
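The key reformulation step, stated generically (the study works with the specific closed-form distributions of the non-cruise times), is that a chance constraint on a connection service level, $\mathbb{P}(T \le s) \ge 1 - \epsilon$, becomes, for a normally distributed $T$ with mean $\mu$ and standard deviation $\sigma$, the deterministic inequality
\[
\mu + z_{1-\epsilon}\, \sigma \;\le\; s,
\]
where $z_{1-\epsilon}$ is the standard normal quantile; when $\mu$ and $\sigma$ depend on the scheduling decisions through affine and norm terms, the constraint is a second-order conic inequality that commercial conic solvers handle directly.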
Blood transfusion is one of the most critical operations in various medical interventions. Currently, the only authorized way of securing the required blood for transfusion is through voluntary donations. For this reason, reorganizing blood donation operations to create an operable and efficient system is of utmost importance. Practice in several countries suggests that well designed mobile blood donation mechanisms have an important role in blood collection systems.
In this study, a mobile blood collection system is designed for the Turkish Red Crescent (TRC) to increase blood collection levels. The design also takes operational costs into account as a second objective, so as to collect large amounts of blood at reasonable cost. In the current system, TRC has bloodmobiles that perform dedicated tours to certain activities (fairs, college fests, etc.), but at the end of each day they bring the collected blood to a designated depot to prevent its spoilage. Considering blood's considerably short shelf life of 24 hours, these dedicated tours may seem justifiable, yet they are not efficient in terms of logistics costs. The proposed system consists of classic bloodmobiles and a new vehicle, called the shuttle, which visits the bloodmobiles in the field and transfers the collected blood to the blood centers, so that bloodmobiles can continue their tours without having to make daily returns to the depot.
A mathematical model is developed to determine the stops of the bloodmobiles and the duration of each visit, as well as the tours of the bloodmobiles and the shuttle. No study in the literature covers all of these decisions; therefore, a new extension of the SVRP is defined, called the SVRP with Integrated Tours. Also, a two-stage IP-based heuristic algorithm is developed for the same problem. The performance of these methodologies is tested on a data set obtained from past blood donation activities in Ankara. In addition, GIS data of the European part of Istanbul is used as a constructed test case. The Pareto set of optimal solutions is generated based on blood amounts and logistics costs, and finally a sensitivity analysis on some important design parameters is conducted.
A Poisson process changes its rate at an unknown and unobservable time from one known value to another. Detecting the change time as quickly as possible in an optimal way is described in the literature as the Poisson disorder problem. In our study, we provide a more realistic generalization of the disorder problem for the Poisson process by introducing fixed and continuous costs for observing the arrival process. As a result, in addition to finding the optimal alarm time, we also characterize an optimal way of observing the arrival process. We illustrate the structure of the solution spaces with the help of numerical examples.
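In its standard Bayesian form (generic notation; the observation costs are the novelty of this study), the disorder problem tracks the posterior $\Pi_t = \mathbb{P}(\theta \le t \mid \mathcal{F}_t)$ of the unobservable change time $\theta$ and seeks an alarm time $\tau$ minimizing a weighted sum of the false alarm probability and the expected detection delay,
\[
\mathbb{P}(\tau < \theta) + c\, \mathbb{E}\big[ (\tau - \theta)^+ \big],
\]
with the optimal alarm being the first time $\Pi$ crosses a threshold; adding fixed and continuous observation costs turns the problem into a joint choice of when to observe the arrivals and when to raise the alarm.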
Organ transplantation is one of the fundamental and effective treatment techniques for patients with critical health problems. However, while 3,930 organs were transplanted in 2011, there were still 20,954 people waiting for a suitable organ as of April 2012 in Turkey. Even though the exact numbers differ, the situation in well-developed countries like the USA is not very different in terms of the donation-to-patient ratio. Thus, matching, defined as finding the best recipient for a donated organ, is crucial for the overall organ transplantation process. There are two main approaches to matching in practice: centralized and hierarchical. In the centralized method, all patients and donors are monitored and matching is coordinated centrally. In the hierarchical method, the matching process is coordinated via a bottom-up hierarchy. The system in Turkey is hierarchical, coordinated by 9 regional coordination centers and one national coordination center. Due to the nature of the matching application in Turkey, the cluster of each regional coordination center is crucial. The transplantation process has many dynamics, such as cold ischemia time (the duration that the organ survives without blood circulation), operation times, and the availability of specialized hospitals and teams.
In this thesis, we study organ transplantation logistics, mainly focusing on the Turkish application. We provide mathematical models that consider problem-specific requirements such as ischemia time. We also consider two-mode transportation, since airplanes and helicopters are widely used in organ transportation. Finally, we develop a simulation model to capture the hierarchical nature of the system and to evaluate the performance of the mathematical model outputs. Both the mathematical model and the simulation model outcomes, based on Turkish data, are compared with the actual regional coordination center locations in Turkey.
In this study, the location of disaster response and relief facilities in Istanbul is investigated in view of a possible earthquake. The problem under study is the problem of determining shelter locations and evacuation routes to be used in the aftermath of a possible earthquake. The objective is to evacuate people in limited time from their geographical locations to shelter locations via the street network of metropolitan Istanbul to minimize the average time spent on the way while obeying possible constraints on maximum allowed path lengths, shelter capacities, and road capacities. We propose two mathematical models for this purpose. Model 1 is a path based model in which possible paths that satisfy certain length restrictions are a priori determined by preprocessing the network data. Model 2 is an arc based model that uses the network data directly and creates the paths as a byproduct of the solution. Both models ensure that each demand node is served by at least one shelter node in accordance with road and shelter capacities. We examine the effects of the number of shelters to be opened, capacity restrictions on shelters as well as additional capacity restrictions on streets of the network that impose upper bounds on the total traffic that can be routed through a given street in a maximum allowed evacuation time. Related “what if” analysis is also provided to study how the evacuation process and evacuees are affected if critical arcs or nodes of the network are disabled.
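The core of the arc-based model (Model 2) can be sketched as a capacitated flow problem; this is schematic and in our notation, and the actual models additionally enforce the path length and evacuation time restrictions:
\begin{align*}
& \sum_{a \in \delta^+(v)} f_a - \sum_{a \in \delta^-(v)} f_a \;=\; q_v - w_v \quad \forall v,\\
& w_v \le \mathrm{cap}_v\, y_v \quad \forall v, \qquad \sum_v y_v \le p, \qquad 0 \le f_a \le u_a \quad \forall a,
\end{align*}
where $q_v$ is the population at node $v$ (zero at non-demand nodes), $w_v$ the number of evacuees sheltered at $v$ (positive only if a shelter is opened there, $y_v = 1$), $f_a$ the evacuee flow on street arc $a$, and $u_a$ the road capacity; summing the balance constraints forces the total sheltered population to equal the total demand.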
A Wireless Sensor Network (WSN) can be deployed in inhospitable environments,
difficult-to-reach terrains, and wild habitats such as war zones, oceans, the poles,
deserts, and rain forests to monitor and observe natural, environmental, and climate-
related phenomena including temperature, pressure, humidity, vibration, light,
chemical and biological agents, etc. The most distinguishing characteristic of a WSN
is the fact that its sensors have finite and non-renewable energy resources. In this talk,
we consider integrated topology control and routing issues in data gathering WSNs
and address the problem of prolonging network lifetime until next sensor deployment.
We suggest a policy to be employed in each period of a deployment cycle and
computationally examine its efficiency in prolonging network lifetime against two
other commonly employed approaches. We present mathematical models and a
general efficient solution procedure that is applicable to all three policies. We
determine that the proposed policy has attractive properties and exhibits very good
performance in extending network lifetime. (Research supported by National Science
Foundation under grant CMMI-0428831).
(Joint work with Hui Lin)
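As background, the classical maximum-lifetime routing linear program (in the spirit of Chang and Tassiulas; a baseline sketch in our notation, not the integrated topology-control models of the talk) is
\begin{align*}
\max\quad & T\\
\text{s.t.}\quad & \sum_j f_{ij} - \sum_j f_{ji} \;=\; r_i\, T \quad \forall i \ne \text{sink},\\
& \sum_j e^{\mathrm{tx}}_{ij} f_{ij} + \sum_j e^{\mathrm{rx}} f_{ji} \;\le\; E_i \quad \forall i, \qquad f \ge 0,
\end{align*}
where $f_{ij}$ is the total data node $i$ sends to node $j$ over the horizon, $r_i$ the data generation rate, $e^{\mathrm{tx}}_{ij}$ and $e^{\mathrm{rx}}$ the per-unit transmission and reception energies, and $E_i$ the initial battery budget of node $i$; topology control adds the choice of which links (and power levels) are available to this routing problem.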