Search Results

Search found 21589 results on 864 pages for 'primary key'.


  • Google Charts: Bar chart labels are reversed

    - by True Soft
    I dynamically create a chart for a website. I have a key/value map, I sort the values descending, and then create the URL:

        http://chart.googleapis.com/chart?chs=400x200&cht=bhs&chbh=a&chdlp=l&chg=25,0&chma=0,0,0,5&chtt=Chart+test&chxr=0,0,8,1&chds=0,8&chxt=t,y&chd=t:8,5,3&chxl=1:|Label_8|Label_5|Label_3

    The values are set by chd=t:8,5,3, and the labels are set by chxl=1:|Label_8|Label_5|Label_3. However, in the chart image the labels are reversed. I searched the documentation, but I couldn't work out why it behaves like this. Is it because I didn't set a value correctly, or is this the desired functionality? I could reverse the label texts in chxl from code so they are displayed how I want. Is this the right way?
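    This appears to be documented behavior rather than a bug: in the (now deprecated) Image Charts API, chxl labels on a vertical axis are applied from the bottom up, while the chd data bars are drawn from the top down, so the two orders end up opposed. Reversing the chxl list from code, as suggested above, is the usual fix; a sketch of the corrected URL, same data as above with the labels supplied in reverse:

        http://chart.googleapis.com/chart?chs=400x200&cht=bhs&chbh=a&chdlp=l&chg=25,0&chma=0,0,0,5&chtt=Chart+test&chxr=0,0,8,1&chds=0,8&chxt=t,y&chd=t:8,5,3&chxl=1:|Label_3|Label_5|Label_8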

    Read the article

  • Can I have a shell alias evaluate a history substitution command?

    - by Brandon
    I'm trying to write an alias for cd !!:1, which takes the 2nd word of the previous command and changes to the directory of that name. For instance, if I type

        rails new_project
        cd !!:1

    the second line will cd into the "new_project" directory. Since !!:1 is awkward to type (even though it's short, it requires three SHIFTed keys on opposite sides of the keyboard, and then an unSHIFTed version of the key that was typed twice SHIFTed), I want to just type something like cd- but since the !!:1 is evaluated on the command line, I (OBVIOUSLY) can't just do alias cd-=!!:1 or I'd be saving an alias that contained "new_project" hard-coded into it. So I tried alias cd-='!!:1' The problem with this is that the !!:1 is NEVER evaluated, and I get a message that no directory named !!:1 exists. How can I make an alias where the history substitution is evaluated AT THE TIME I ISSUE THE ALIAS COMMAND, not when I define the alias, and not never? (I've tried this in both bash and zsh, and get the same results in both.)
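    A hedged workaround rather than a definitive answer: history expansion runs before alias expansion, so !!:1 inside an alias body is either expanded when the alias is defined or not at all. A shell function that re-reads the previous command from the history with fc sidesteps the problem entirely; a minimal sketch (works in bash, and zsh behaves the same here; the function name is just an example):

        # cd into the directory named by the 2nd word of the previous command
        cd-() {
            local prev
            prev=$(fc -ln -1)                        # previous command line, e.g. "rails new_project"
            cd "$(printf '%s\n' "$prev" | awk '{print $2}')"
        }

    An alias can't do this because it is pure text substitution; a function body runs at invocation time, which is exactly when the history lookup needs to happen.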

    Read the article

  • Having trouble using 'AND' in CONTAINSTABLE SQL SERVER FULL TEXT SEARCH

    - by Joshua
    I've been using FULL-TEXT for a while but I cannot seem to get the most relevant results sometimes. If I have a field with something like "An Overview of Pain Medicine 5/12/2006" and a user types "An Overview 5/12/2006", we create a search like:

        '"An" AND "Overview" AND "5/12/2006"'   -- 0 results (bad)
        '"Overview" AND "5/12/2006"'            -- 1 result (good)

    The CONTAINSTABLE portion of my query:

        FROM ce_Activity A
        INNER JOIN CONTAINSTABLE(View_Activities, (Searchable), @Search) AS KeyTbl
            ON A.ActivityID = KeyTbl.[KEY]

    "Searchable" is a field that contains the activity title and start date (converted to string) in one field so it's all search-friendly. Why would this happen?
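    A likely culprit, offered as a hypothesis: "an" is on SQL Server's noise-word (stopword) list, and by default a stopword ANDed into a CONTAINS/CONTAINSTABLE query matches nothing, which zeroes out the whole AND chain. Two hedged options: strip stopwords from the user's input before building @Search, or, on SQL Server 2008 and later where the option exists, tell the engine to treat noise words in queries as harmless placeholders:

        -- Requires sysadmin; 'transform noise words' is an advanced option.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'transform noise words', 1;
        RECONFIGURE;

    With the option on, '"An" AND "Overview" AND "5/12/2006"' no longer fails just because "An" is a stopword.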

    Read the article

  • Iterating over a HashMap with JSTL, always getting last Object

    - by hkansal
    Hello, I have a Map which I tried to iterate over with JSTL. Somehow I could not accomplish even this basic task. I also referred to the questions here and here, but that is what I am already doing, still no success. What I tried to do was:

        <c:forEach items="${myMap}" var="myEntry">
            ${myEntry.key} + ${myEntry.value}
        </c:forEach>

    But I always get the last object. Maybe I am missing something trivial, please advise. Thank You
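    The forEach itself is correct: iterating a Map with JSTL yields Map.Entry objects, so ${myEntry.key} and ${myEntry.value} are the right accessors. One classic cause of odd results on older stacks, offered only as a guess, is a mismatch between the JSTL version, the taglib URI, and the web.xml servlet version, which changes whether ${} is evaluated as expected. A minimal sanity-check sketch, assuming the map is exposed as a request attribute named myMap:

        <%-- JSTL 1.1+ URI; the older http://java.sun.com/jstl/core URI pairs
             with JSTL 1.0 and different EL handling --%>
        <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
        <c:forEach items="${myMap}" var="myEntry">
            <c:out value="${myEntry.key}"/> = <c:out value="${myEntry.value}"/><br/>
        </c:forEach>

    If this prints every entry, the problem lies elsewhere, in how myMap is populated or named.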

    Read the article

  • Are keys and values of %INC platform-dependent or not?

    - by codeholic
    I'd like to get the full filename of an included module. Consider this code:

        package MyTest;
        my $path = join '/', split /::/, __PACKAGE__;
        $path .= ".pm";
        print "$INC{$path}\n";
        1;

        $ perl -Ipath/to/module -MMyTest -e0
        path/to/module/MyTest.pm

    Will it work on all platforms? perlvar says: "The hash %INC contains entries for each filename included via the do, require, or use operators. The key is the filename you specified (with module names converted to pathnames), and the value is the location of the file found." Are these keys platform-dependent or not? Should I use File::Spec or what? At least ActivePerl on win32 uses / instead of \. Update: What about %INC values? Are they platform-dependent?

    Read the article

  • Linq to sql add/update in different methods with different datacontexts

    - by Kurresmack
    I have two methods, Add() and Update(), which each create a DataContext and return the object created/updated. In my unit test I first call Add(), do some stuff, and then call Update(). The problem is that Update() fails with the exception: System.Data.Linq.DuplicateKeyException: Cannot add an entity with a key that is already in use. I understand the issue but want to know what to do about it. I've read a bit about how to handle multiple DataContext objects and from what I've heard this way is OK. I understand that the entity is still attached to the DataContext in Add(), but I need to find out how to solve this? Thanks in advance
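    One standard way out, sketched under the assumption that Update() receives the detached object returned by Add(): don't re-attach the stale instance; load a fresh copy inside Update()'s own DataContext, copy the changed values onto it, and submit. Type, property, and context names below are placeholders:

        // Hypothetical Update(): re-query in this context, copy values, submit.
        public Customer Update(Customer changed)
        {
            using (var db = new MyDataContext())
            {
                // Fetch the row this context knows about...
                var current = db.Customers.Single(c => c.Id == changed.Id);
                // ...copy each field that may have changed...
                current.Name = changed.Name;
                current.Email = changed.Email;
                // ...and persist; no second INSERT is attempted.
                db.SubmitChanges();
                return current;
            }
        }

    The alternative is to share one DataContext across Add() and Update() for the lifetime of the unit of work, which is the usage LINQ to SQL's designers generally assumed.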

    Read the article

  • The JavaOne 2012 Sunday Technical Keynote

    - by Janice J. Heiss
    At the JavaOne 2012 Sunday Technical Keynote, held at the Masonic Auditorium, Mark Reinhold, Chief Architect, Java Platform Group, stated that they were going to do things a bit differently--"rather than 20 minutes of SE, and 20 minutes of FX, and 20 minutes of EE, we're going to mix it up a little," he said. "For much of it, we're going to be showing a single application, to show off some of the great work that's been done in the last year, and how Java can scale well--from the cloud all the way down to some very small embedded devices, and how JavaFX scales right along with it." Richard Bair and Jasper Potts from the JavaFX team demonstrated a JavaOne schedule builder application with impressive navigation, animation, pop-overs, and transitions. They noted that the application runs seamlessly on either Windows or Macs, running Java 7. They then ran the same application on an Ubuntu Linux machine--"it just works," said Bair. The JavaFX duo next put the recently released JavaFX Scene Builder through its paces -- dragging and dropping various image assets to build the application's UI, then fine-tuning a CSS file for the finished look and feel. Among many other new features, in the past six months JavaFX has released support for H.264 and HTTP live streaming, "so you can get all the real media playing inside your JavaFX application," said Bair. And in their developer preview builds of JavaFX 8, they've now split the rendering thread from the UI thread, to better take advantage of multi-core architectures. Next, Brian Goetz, Java Language Architect, explored language and library features planned for Java SE 8, including Lambda expressions and better parallel libraries. These feature changes both simplify code and free up libraries to more effectively use parallelism. "It's currently still a lot of work to convert an application from serial to parallel," noted Goetz. Reinhold had previously boasted of Java scaling down to "small embedded devices," so Bair and Potts next ran their schedule builder application on a small embedded PandaBoard system with an OMAP4 chip set. Connected to a touch screen, the embedded board ran the same JavaFX application previously seen on the desktop systems, but now running on Java SE Embedded. (The systems can be seen and tried at four of the nearby JavaOne hotels.) Bob Vandette, Java Embedded Architect, then displayed a $25 Raspberry Pi ARM-based system running Java SE Embedded, noting the even greater need for the platform independence of Java in such highly varied embedded processor spaces. Reinhold and Vandette discussed Project Jigsaw, the planned modularization of the Java SE platform, and its deferral from the Java 8 release to Java 9. Reinhold demonstrated the promise of Jigsaw by running a modularized demo version of the earlier schedule builder application on the resource-constrained Raspberry Pi system--although the demo gods were not smiling down, and the application ultimately crashed. Reinhold urged developers to become involved in the Java 8 development process--getting the weekly builds, trying out their current code, and trying out the new features:

        http://openjdk.java.net/projects/jdk8
        http://openjdk.java.net/projects/jdk8/spec
        http://jdk8.java.net

    From there, Arun Gupta explored Java EE. The primary themes of Java EE 7, Gupta stated, will be greater productivity and HTML 5 functionality (WebSocket, JSON, and HTML 5 forms).
Part of the planned productivity increase of the release will come from a reduction in writing boilerplate code--through the widespread use of dependency injection in the platform, along with default data sources and default connection factories. Gupta noted the inclusion of JAX-RS in the web profile, the changes and improvements found in JMS 2.0, as well as enhancements to Java EE 7 in terms of JPA 2.1 and EJB 3.2. GlassFish 4 is the reference implementation of Java EE 7, and currently includes WebSocket, JSON, JAX-RS 2.0, JMS 2.0, and more. The final release is targeted for Q2, 2013. Looking forward to Java EE 8, Gupta explored how the platform will provide multi-tenancy for applications, modularity based on Jigsaw, and cloud architecture. Meanwhile, Project Avatar is the group's incubator project for designing an end-to-end framework for building HTML 5 applications. Santiago Pericas-Geertsen joined Gupta to demonstrate their "Angry Bids" auction/live-bid/chat application using many of the enhancements of Java EE 7, along with an Avatar HTML 5 infrastructure, and running on the GlassFish reference implementation.Finally, Gupta covered Project Easel, an advanced tooling capability in NetBeans for HTML5. John Ceccarelli, NetBeans Engineering Director, joined Gupta to demonstrate creating an HTML 5 project from within NetBeans--formatting the project for both desktop and smartphone implementations. Ceccarelli noted that NetBeans 7.3 beta will be released later this week, and will include support for creating such HTML 5 project types. Gupta directed conference attendees to: http://glassfish.org/javaone2012 for everything about Java EE and GlassFish at JavaOne 2012.

    Read the article

  • How to validate HTTP request headers before receiving request body using WCF

    - by anelson
    I'm implementing a REST service using WCF which will be used to upload very large files. The HTTP headers in this request will communicate information which will be validated prior to allowing the upload to proceed (things like permissions, available disk space, etc). It's possible this validation will fail, resulting in an error response. I'd like to do this validation prior to the client sending the body of the request, so it has a chance to detect failure before uploading potentially gigabytes of data. RESTful web services use the HTTP 1.1 Expect: 100-continue header in the request to implement this. For example, Amazon S3's REST API can validate your key and ACLs in response to an object PUT operation, returning 100 Continue if all is well, indicating you may proceed to send your data. I've rummaged around the WCF documentation and I just can't see a way to accomplish this without doing some pretty low-level hooking into the HTTP request processing pipeline. How would you suggest I solve this problem?

    Read the article

  • Android Release Version and "Waiting for Debugger"

    - by Tom Richards
    I know this has been asked before but I still don't have a solution. My first app: developed and debugged on my Moto Droid, then I followed all the release steps (exported from Eclipse, signed with my key), including removing the debuggable attribute from the manifest XML. I copied the resulting apk to the droid, disconnected the USB, and installed it by double-clicking on the file using Astro. I get the "Waiting for Debugger" message like when I am debugging, but it never goes away. Doing something real stupid I know, but I can't figure it out. Any help would be appreciated. Thanks, Tom

    Read the article

  • Auto-Configuring SSIS Packages

    - by Davide Mauri
    SSIS Package Configurations are very useful to make packages flexible, so that you can change object properties at run time and thus make the package configurable without having to open and edit it. In a complex scenario where you have dozens of packages (even in the smallest BI project I worked on I had 50 packages), each package may have its own configuration needs. This means that each time you have to run the package you have to pass the correct Package Configuration. I usually use XML configuration files, and I also force everyone that works with me to make sure that an object that is used in several packages has the same name in all packages where it is used, in order to simplify configuration usage. Connection Managers are a good example of one of those objects. For example, all the packages that need to access the Data Warehouse database must have a Connection Manager named DWH. Basically we define a set of “global” objects so that we can have a configuration file for them that can be used by all packages. If a package has some specific configuration needs, we create a specific – or “local” – XML configuration file, or we set the value that needs to be configured at runtime using DTLoggedExec's Package Parameters: http://dtloggedexec.davidemauri.it/Package%20Parameters.ashx Now, how can we improve this even more? I'd like to have a package that, when it's run, automatically goes “somewhere”, searches for global or local configuration, loads it and applies it to itself. That's the basic idea of Auto-Configuring Packages. The “somewhere” is a SQL Server table (a reconstructed sketch of its definition appears at the end of this post). In this table you'll put the values that you want to be used at runtime by your package. The ConfigurationFilter column specifies to which package a configuration line has to be applied. A package will use that line only if the value specified in the ConfigurationFilter column is equal to its name; in the sample configuration rows, only the package named “simple-package” will use line number two. There is an exception here: the $$Global value indicates a configuration row that has to be applied to any package. With this simple behavior it's possible to replicate the “global” and the “local” configuration approach I've described before. The ConfigurationValue column contains the value you want to be applied at runtime, and the PackagePath column contains the object to which that value will be applied. The ConfiguredValueType column defines the data type of the value, and the Checksum column contains a calculated value that is simply the hash of ConfigurationFilter plus PackagePath, so that it can be used as a primary key to guarantee uniqueness of configuration rows. As you may have noticed, the table is very similar to the table originally used by SSIS to put DTS Configurations into SQL Server tables: SQL Server SSIS Configuration Type: http://msdn.microsoft.com/en-us/library/ms141682.aspx Now, how does it work? It's very easy: you just have to call DTLoggedExec with the /AC option:

        DTLoggedExec.exe /FILE:"mypackage.dtsx" /AC:"localhost;ssis_auto_configuration;ssiscfg.configuration"

    The /AC option expects a string with the format <database_server>;<database_name>;<table_name>; only Windows Authentication is supported. When DTLoggedExec finds an Auto-Configuration request, it injects a new connection manager into the loaded package.
    The injected connection manager is named $$DTLoggedExec_AutoConfigure and is used by the two SQL Server DTS Configurations ($$DTLoggedExec_Global and $$DTLoggedExec_Local), also injected by DTLoggedExec, that load the “global” and “local” configuration. Now, you may start to wonder why all this machinery is needed, instead of simply always passing a package two XML DTS Configuration files (one for the “local” and one for the “global” configuration), doing something like this:

        DTLoggedExec.exe /FILE:"mypackage.dtsx" /CONF:"global.dtsConfig" /CONF:"mypackage.dtsConfig"

    The problem is that this approach doesn't work if you have, in one of the two configuration files, a value that has to be applied to an object that doesn't exist in the loaded package. This situation will raise an error that halts package execution. To solve this problem, you may want to create a configuration file for each package. Unfortunately this will make deployment and management harder, since you'll have to deal with a great number of configuration files. The Auto-Configuration approach solves all these problems at once! We're using it in a project where we have hundreds of packages, and I can tell you that deployment of packages and their configuration for the pre-production and production environments has never been so easy! To use the Auto-Configuration option you have to download the latest DTLoggedExec release: http://dtloggedexec.codeplex.com/releases/view/62218 Feedback, as usual, is very welcome!
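    For reference, a sketch of what the configuration table might look like. The post's own definition was shown as an image that did not survive, so this DDL is a reconstruction from the columns described above and the stock SSIS configuration table it is said to resemble, using the table name from the /AC example:

        CREATE TABLE ssiscfg.[configuration] (
            ConfigurationFilter NVARCHAR(255) NOT NULL, -- package name, or $$Global for all packages
            ConfigurationValue  NVARCHAR(255) NOT NULL, -- value applied at runtime
            PackagePath         NVARCHAR(255) NOT NULL, -- e.g. \Package.Connections[DWH].Properties[ConnectionString]
            ConfiguredValueType NVARCHAR(20)  NOT NULL, -- data type of the value, e.g. String
            -- Hash of filter + path, used as the primary key as described above
            [Checksum] AS CHECKSUM(ConfigurationFilter, PackagePath) PERSISTED PRIMARY KEY
        );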

    Read the article

  • NSRangeException Issue

    - by Garry
    Hi, I am writing a Core Data-based iPhone app and I am new to Objective-C. I have a bug that I am really struggling to nail. The iPhone simulator keeps crashing with the following error message:

        2010-03-21 17:37:40.583 Patients[3689:207] *** Terminating app due to uncaught exception
        'NSRangeException', reason: '*** -[NSCFArray insertObject:atIndex:]: index (1) beyond bounds (1)'
        2010-03-21 17:37:40.585 Patients[3689:207] Stack: (
            31007835, 2516698377, 31091771, 31091610, 601273, 197333, 3194546, 3141378, 25020,
            29768673, 214570, 30740485, 204512, 29114749, 29505379, 29001194, 29252410, 29190487,
            30794322, 30791263, 30788680, 39097877, 39098074, 2883503, 9912, 9766 )

    This error happens when I press return on a textField. What happens when the return key is pressed is that an attribute on an entity is updated. I don't know what array is out of bounds as I am not using any arrays in my code! Is there any way of getting more detail as to where in my code the error is? Thanks,
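    A general-purpose technique for this (standard gdb-era Xcode advice, not specific to Core Data): set a breakpoint on objc_exception_throw, so the debugger stops at the moment the exception is raised, while your own frames are still on the stack:

        (gdb) break objc_exception_throw
        (gdb) run
        # when it stops, 'bt' prints a symbolic backtrace that includes your code

    The message itself also narrows things down: something is calling insertObject:atIndex: with index 1 on an array holding one element, quite possibly a collection managed internally by a framework object rather than an NSArray you created yourself.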

    Read the article

  • Java: matching two different type of array

    - by sling
    Hi, I am doing a password login that requires me to match two arrays: User and Pass. If the user keys in "mark" and "pass", it should show success. However I have trouble with the String[] input = pass.getPassword(); line and the matching of the two arrays.

        String[] User = {"mark", "susan", "bobo"};
        String[] Pass = {"pass", "word", "password"};
        String[] input = pass.getPassword();
        if (Pass.length == input.length && user.getText().equals(User)) {
            lblstat.setForeground(Color.GREEN);
            lblstat.setText("Successful");
        } else {
            lblstat.setForeground(Color.RED);
            lblstat.setText("Failed");
        }
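    A sketch of the usual fix, with two assumptions made explicit: JPasswordField.getPassword() returns a char[] (not a String[]), and the username has to be located first so the password at the same index can be compared:

        char[] input = pass.getPassword();             // JPasswordField returns char[]
        String typedUser = user.getText();
        boolean ok = false;
        for (int i = 0; i < User.length; i++) {
            // This slot matches the username AND its password matches the input
            if (User[i].equals(typedUser)
                    && java.util.Arrays.equals(Pass[i].toCharArray(), input)) {
                ok = true;
                break;
            }
        }
        if (ok) {
            lblstat.setForeground(Color.GREEN);
            lblstat.setText("Successful");
        } else {
            lblstat.setForeground(Color.RED);
            lblstat.setText("Failed");
        }

    The original comparison user.getText().equals(User) can never be true, because it compares a String against a whole array.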

    Read the article

  • Google Maps - user to pinpoint a location

    - by JohnB
    Hi, Is it possible to allow users of my website to mark places on a map I display using the Google Maps API? I need to then save that location's coordinates to a db. I've been looking through the Google Maps API; I found that I can use the web service to do searches like this: http://maps.google.com/maps/geo?q=Maine,+United+States&output=json&oe=utf8&sensor=false&key=my_key But I am not sure it works at house-number level (which I need it to), and I'm not sure how to display a 'did you mean?' to the user when he misspells the address. Anyone have an idea? Thanks,
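    Letting users drop a pin is directly supported by the Maps JavaScript API; a sketch in v3 style (the element ID, center point, and saveLocation helper are placeholders of mine):

        // Assumes a <div id="map"></div> and the Maps JavaScript API loaded.
        var map = new google.maps.Map(document.getElementById('map'), {
            center: new google.maps.LatLng(45.25, -69.0),   // placeholder center
            zoom: 8
        });
        google.maps.event.addListener(map, 'click', function (e) {
            new google.maps.Marker({ position: e.latLng, map: map });
            // Hypothetical helper: POST the coordinates to the server for the db
            saveLocation(e.latLng.lat(), e.latLng.lng());
        });

    For the 'did you mean?' part: the geocoder returns an array of candidate matches for ambiguous or misspelled input, so presenting the top few results as clickable suggestions is the usual pattern.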

    Read the article

  • Project Management Helps AmeriCares Deliver International Aid

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Alison Weiss

    Handle with Care
    Sound project management helps AmeriCares bring international aid to those in need.

    The stakes are always high for AmeriCares. On a mission to restore health and save lives during times of disaster, the nonprofit international relief and humanitarian aid organization delivers donated medicines, medical supplies, and humanitarian aid to people in the U.S. and around the globe. Founded in 1982 with the express mission of responding as quickly and efficiently as possible to help people in need, the Stamford, Connecticut-based AmeriCares has delivered more than US$10.5 billion in aid to 147 countries over the past three decades. “It’s critically important to us that we steward all the donations and that the medical supplies and medicines get to people as quickly as possible with no loss,” says Kate Sears, senior vice president for finance and technology at AmeriCares. “Whether we’re shipping IV solutions to victims of cholera in Haiti or antibiotics to Somali famine victims, we need to get the medicines there sooner because it means more people will be helped and lives improved or even saved.” Ten years ago, the tracking systems used by AmeriCares associates were paper-based. In recent years, staff started using spreadsheets, but the tracking processes were not standardized between teams. “Every team was tracking completely different information,” says Megan McDermott, senior associate, Sub-Saharan Africa partnerships, at AmeriCares. “It was just a few key things. For example, we tracked the date a shipment was supposed to arrive and the date we got reports from our partner that a hospital received aid on their end.” While the data was accurate, much detail was being lost in the process. AmeriCares management knew it could do a better job of tracking this enterprise data and in 2011 took a significant step by implementing Oracle’s Primavera P6 Professional Project Management. “It’s a comprehensive solution that has helped us improve the monitoring and controlling processes. It has allowed us to do our distribution better,” says Sears. In addition, the implementation effort has been a change agent, helping AmeriCares leadership rethink project management across the entire organization. Initially, much of the focus was on standardizing processes, but staff members also learned the importance of thinking proactively to prevent possible problems and evaluating results to determine if goals and objectives are truly being met. Such data about process efficiency and overall results is critical not only to AmeriCares staff but also to the donors supporting the organization’s life-saving missions.

    Efficiency Saves Lives

    One of AmeriCares’ core operations is to gather product donations from the private sector, establish where the most-urgent needs are, and solicit monetary support to send the aid via ocean cargo or airlift to welfare- and health-oriented nongovernmental organizations, hospitals, health networks, and government ministries based in areas in need. In 2011 alone, AmeriCares sent more than 3,500 shipments to 95 countries in response to both ongoing humanitarian needs and more than two dozen emergencies, including deadly tornadoes and storms in the U.S. and the devastating tsunami in Japan. When it comes to nonprofits in general, donors want to know that the charitable organizations they support are using funds wisely.
Typically, nonprofits are evaluated by donors in terms of efficiency, an area where AmeriCares has an excellent reputation: 98 percent of expenses go directly to supporting programs and less than 2 percent represent administrative and fundraising costs. Donors, however, should look at more than simple efficiency, says Peter York, senior partner and chief research and learning officer at TCC Group, a nonprofit consultancy headquartered in New York, New York. They should also look at whether organizations have the systems in place to sustain their missions and continue to thrive. An expert on nonprofit organizational management, York has spent years studying sustainable charitable organizations. He defines them as nonprofits that are able to achieve the ongoing financial support to stay relevant and continue doing core mission work. In his analysis of well over 2,500 larger nonprofits, York has found that many are not sustaining, and are actually scaling back in size. “One of the biggest challenges of nonprofit sustainability is the general public’s perception that every dollar donated has to go only to the delivery of service,” says York. “What our data shows is that there are some fundamental capacities that have to be there in order for organizations to sustain and grow.” York’s research highlights the importance of data-driven leadership at successful nonprofits. “You’ve got to have the tools, the systems, and the technologies to get objective information on what you do, the people you serve, and the results you’re achieving,” says York. “If leaders don’t have the knowledge and the data, they can’t make the strategic decisions about programs to take organizations to the next level.” Historically, AmeriCares associates have used time-tested and cost-effective strategies to ship and then track supplies from donation to delivery to their destinations in designated time frames. When disaster strikes, AmeriCares ships by air and generally pulls out all the stops to deliver the most urgently needed aid within the first few days and weeks. Then, as situations stabilize, AmeriCares turns to delivering sea containers for the postemergency and ongoing aid so often needed over the long term. According to McDermott, getting a shipment out the door is fairly complicated, requiring as many as five different AmeriCares teams collaborating together. The entire process can take months—from when products are received in the warehouse and deciding which recipients to allocate supplies to, to getting customs and governmental approvals in place, actually shipping products, and finally ensuring that the products are received in-country. Delivering that aid is no small affair. “Our volume exceeds half a billion dollars a year worth of donated medicines and medical supplies, so it’s a sizable logistical operation to bring these products in and get them out to the right place quickly to have the most impact,” says Sears. “We really pride ourselves on our controls and efficiencies.” Adding to that complexity is the fact that the longer it takes to deliver aid, the more dire the human need can be. Any time AmeriCares associates can shave off the complicated aid delivery process can translate into lives saved. “It’s really being able to track information consistently that will help us to see where are the bottlenecks and where can we work on improving our processes,” says McDermott. 
    Setting a Standard

    Productivity and information management improvements were key objectives for AmeriCares when staff began the process of implementing Oracle’s Primavera solution. But before configuring the software, the staff needed to take the time to analyze the systems already in place. According to Greg Loop, manager of database systems at AmeriCares, the organization received guidance from several consultants, including Rich D’Addario, consulting project manager in the Primavera Global Business Unit at Oracle, who was instrumental in shepherding the critical requirements-gathering phase. D’Addario encouraged staff to begin documenting shipping processes by considering the order in which activities occur and which ones are dependent on others to get accomplished. This exercise helped everyone realize that to be more efficient, they needed to keep track of shipments in a more standard way. “The staff didn’t recognize formal project management methodology,” says D’Addario. “But they did understand what the most important things are and that if they go wrong, an entire project can go off course.” Before, if a boatload of supplies was being sent to Haiti and there was a problem somewhere, a lot of time was taken up finding out where the problem was—because staff was not tracking things in a standard way. As a result, even more time was needed to find possible solutions to the problem and alert recipients that the aid might be delayed. “For everyone to put on the project manager hat and standardize the way every single thing is done means that now the whole organization is on the same page as to what needs to occur from the time a hurricane hits Haiti and when a boat pulls in to unload supplies,” says D’Addario. With so much care taken to put a process foundation firmly in place, configuring the Primavera solution was actually quite simple. Specific templates were set up for different types of shipments, and dashboards were implemented to provide executives with clear overviews of every project in the system. AmeriCares’ Loop reports that system planning, refining, and testing, followed by writing up documentation and training, took approximately four months. The system went live in spring 2011 at AmeriCares’ Connecticut headquarters. While the nonprofit has an international presence, with warehouses in Europe and offices in Haiti, India, Japan, and Sri Lanka, most donated medicines come from U.S. entities and are shipped from the U.S. out to the rest of the world. In addition, all shipments are tracked from the U.S. office. AmeriCares doesn’t expect the Primavera system to take months off the shipping time, especially for sea containers. However, any time saved is still important because it will allow aid to be delivered to people more quickly at a lower overall cost. “If we can trim a day or two here or there, that can translate into lives that we’re saving, especially in emergency situations,” says Sears.

    A Cultural Change

    Beyond the measurable benefits that come with IT-driven process improvement, AmeriCares management is seeing a change in culture as a result of the Primavera project. One change has been treating every shipment of aid as a project, and everyone involved with facilitating shipments as a project manager. “This is a revolutionary concept for us,” says McDermott.
    “Before, we were used to thinking we were doing logistics—getting a container from point A to point B without looking at it as one project and really understanding what it meant to manage it.” AmeriCares staff is also happy to report that collaboration within the organization is much more efficient. When someone creates a shipment in the Primavera system, the same shared template is used, which means anyone can log in to the system to see the status of a shipment. Knowledgeable staff can access a shipment project to help troubleshoot a problem. Management can easily check the status of projects across the organization. “Dashboards are really useful,” says McDermott. “Instead of going into the details of each project, you can just see the high-level real-time information at a glance.” The new system is helping team members focus on proactively managing shipments rather than simply reacting when problems occur. For example, when a container is shipped, documents must be included for customs clearance. Now, the shipping template has built-in reminders to prompt team members to ask for copies of these documents from freight forwarders and to follow up with partners to discover if a shipment is on time. In the past, staff may not have worked on securing these documents until they’d been notified a shipment had arrived in-country. Another benefit of capturing and adopting best practices within the Primavera system is that staff training is easier. “Capturing the processes in documented steps and milestones allows us to teach new staff members how to do their jobs faster,” says Sears. “It provides them with the knowledge of their predecessors so they don’t have to keep reinventing the wheel.” With the Primavera system already generating positive results, management is eager to take advantage of advanced capabilities. Loop is working on integrating the company’s proprietary inventory management system with the Primavera system so that when logistics or warehousing operators input data, the information will automatically go into the Primavera system. In the past, this information had to be manually keyed into spreadsheets, often leading to errors.

    Mining Historical Data

    Another feature on the horizon for AmeriCares is utilizing Primavera P6 Professional Project Management reporting capabilities. As the system begins to include more historical data, management soon will be able to draw on this information to conduct analysis that has not been possible before and create customized reports. For example, at the beginning of the shipment process, staff will be able to use historical data to more accurately estimate how long the approval process should take for a particular country. This could help ensure that food and medicine with limited shelf lives do not get stuck in customs or used beyond their expiration dates. The historical data in the Primavera system will also help AmeriCares with better planning year to year. The nonprofit’s staff has always put together a plan at the beginning of the year, but this has been very challenging simply because it is impossible to predict disasters. Now, management will be able to look at historical data and see trends and statistics as they set current objectives and prepare for future need. In addition, this historical data will provide AmeriCares management with the ability to review year-end data and compare actual project results with goals set at the beginning of the year—to see if desired outcomes were achieved and if there are areas that need improvement.
It’s this type of information that is so valuable to donors. And, according to York, project management software can play a critical role in generating the data to help nonprofits sustain and grow. “It is important to invest in systems to help replicate, expand, and deliver services,” says York. “Project management software can help because it encourages nonprofits to examine program or service changes and how to manage moving forward.” Sears believes that AmeriCares donors will support the return on investment the organization will achieve with the Primavera solution. “It won’t be financial returns, but rather how many more people we can help for a given dollar or how much more quickly we can respond to a need,” says Sears. “I think donors are receptive to such arguments.” And for AmeriCares, it is all about the future and increasing results. The project management environment currently may be quite simple, but IT staff plans to expand the complexity and functionality as the organization grows in its knowledge of project management and the goals it wants to achieve. “As we use the system over time, we’ll continue to refine our best practices and accumulate more data,” says Sears. “It will advance our ability to make better data-driven decisions.”

    Read the article

  • Python: Decent config file format

    - by miracle2k
    I'd like to use a configuration file format which supports key/value pairs and nestable, repeatable structures, and which is as light on syntax as possible. I'm imagining something along the lines of:

        cachedir = /var/cache
        mail_to = [email protected]

        job {
            name = my-media
            frequency = 1 day
            source {
                from = /home/michael/Images
                source { }
                source { }
            }
        }

        job { }

    I'd be happy with something using significant whitespace as well. JSON requires too many explicit syntax rules (quoting, commas, etc.). YAML is actually pretty good, but would require the jobs to be defined as a YAML list, which I find slightly awkward to use.

    Read the article

  • What stack would you use for a new web Java project if you started today?

    - by Joelio
    If you were to start a brand new Java project today with the following requirements:

        - high scale (20K+ users)
        - you want to use something that is fairly mature (not changing dramatically) and won't be dead technology in 3 years
        - you want something very productive (no server restarts in dev; save the code and it's automatically compiled and deployed); productivity and time to market are key
        - some amount of Ajax on the front end
        - no scripting language (JRuby, Groovy, PHP, etc.); it has to be Java
        - has to support i18n

    What stack would you use, and why? (When I say stack, I mean everything soup to nuts, so app server, MVC framework, bean framework, ORM framework, JavaScript framework, etc...)

    Read the article

  • Why does my NSWindow only receive mouseOver events the first time?

    - by DanieL
    I have an application where a borderless window is shown and hidden, using orderOut and orderFront. When it is visible, I want it to become the key window when the mouse moves over it. So far I've done this: In awakeFromNib I have set its first responder to itself. In the window's constructor I set accepts mouse events to YES. In the mouseMoved method, I use makeKeyAndOrderFront:. My problem is that this only works the first time I move the mouse over the window. After that, it doesn't receive any mouseOver events. I've tried checking the firstResponder, but as far as I can tell it never changes from the window. Any ideas what I can do to get this working?
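    Two things are usually needed in this situation; a sketch, not a verified diagnosis. First, a borderless window refuses key status unless its NSWindow subclass overrides canBecomeKeyWindow. Second, plain mouseMoved events are only delivered to the key window, so once the window resigns key it goes deaf; an NSTrackingArea marked active-always (available since 10.5) keeps mouse events flowing regardless of key status:

        // In the borderless NSWindow subclass:
        - (BOOL)canBecomeKeyWindow {
            return YES;   // borderless windows return NO by default
        }

        // Somewhere during setup, on the window's content view:
        NSTrackingArea *area = [[[NSTrackingArea alloc]
            initWithRect:[contentView bounds]
                 options:(NSTrackingMouseEnteredAndExited |
                          NSTrackingMouseMoved |
                          NSTrackingActiveAlways)   // deliver even when not key
                   owner:contentView
                userInfo:nil] autorelease];
        [contentView addTrackingArea:area];

    With the tracking area in place, mouseEntered:/mouseMoved: fire even after the window has resigned key, so makeKeyAndOrderFront: can run on every mouse-over, not just the first.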

    Read the article

  • OOP Design Question - Where/When do you Validate properties?

    - by JW
    I have read a few books on OOP (DDD / PoEAA / Gang of Four) and none of them seem to cover the topic of validation - it seems to be always assumed that data is valid. I gather from the answers to this post (http://stackoverflow.com/questions/1651964/oop-design-question-validating-properties) that a client should only attempt to set a valid property value on a domain object. This person has asked a similar question that remains unanswered: http://bytes.com/topic/php/answers/789086-php-oop-setters-getters-data-validation#post3136182 So how do you ensure it is valid? Do you have a 'validator method' alongside every getter and setter?

        isValidName()
        setName()
        getName()

    I seem to be missing some key basic knowledge about OOP data validation - can you point me to a book that covers this topic in detail? I.e. covering different types of validation / invariants / handling feedback / whether to use exceptions or not, etc.
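    One common pattern, sketched in Java as an illustration (not something prescribed by the books cited): the setter enforces the invariant by rejecting bad values, and a static predicate is exposed so clients can pre-check input (say, form data) without triggering the exception:

        public class Person {
            private String name;

            // Static predicate: lets clients validate before calling the setter.
            public static boolean isValidName(String candidate) {
                return candidate != null && !candidate.trim().isEmpty();
            }

            // The setter guards the invariant, so the object can never
            // hold an invalid name no matter who the caller is.
            public void setName(String name) {
                if (!isValidName(name)) {
                    throw new IllegalArgumentException("Invalid name: " + name);
                }
                this.name = name;
            }

            public String getName() {
                return name;
            }
        }

    The division of labor: the predicate supports friendly feedback at the UI boundary, while the exception in the setter is the last line of defense against programming errors.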

    Read the article

  • Facebook status.get API throws 500 HTTP status code

    - by Charles Prakash Dasari
    I have an APP that calls the Facebook status.get method via the REST server - restserver.php - using the session key method. This app works fine for most users, but for one user I consistently receive an HTTP 500 status code. Since this doesn't come with any specific Facebook error message, it is almost impossible for me to debug. Has anyone faced a similar problem? What could be wrong with this user account? I checked the privacy options that I could think of and they look fine. Also, for the same user, I can use the friends.get method without any problem. EDIT: I tried the Facebook forums as well, but it was of no use. Any pointers in the direction of debugging/troubleshooting this problem are also appreciated.

    Read the article

  • Visualize the depth buffer

    - by Thanatos
    I'm attempting to visualize the depth buffer for debugging purposes, by drawing it on top of the actual rendering when a key is pressed. It's mostly working, but the resulting image appears to be zoomed in. (It's not just the original image, in an odd grayscale.) Why is it not the same size as the color buffer? This is what I'm using to view the depth buffer:

        void get_gl_size(int &width, int &height)
        {
            int iv[4];
            glGetIntegerv(GL_VIEWPORT, iv);
            width = iv[2];
            height = iv[3];
        }

        void visualize_depth_buffer()
        {
            int width, height;
            get_gl_size(width, height);

            float *data = new float[width * height];
            glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, data);
            glDrawPixels(width, height, GL_LUMINANCE, GL_FLOAT, data);
            delete [] data;
        }
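    Two pieces of GL state worth ruling out, offered as a debugging sketch rather than a confirmed diagnosis: glDrawPixels honors the current pixel zoom and raster position, so if either was changed for the normal rendering path, the dump will appear scaled or offset:

        // Before the glDrawPixels call in visualize_depth_buffer():
        glPixelZoom(1.0f, 1.0f);   // any other zoom factor makes the dump look zoomed
        glWindowPos2i(0, 0);       // raster position in window coords (OpenGL 1.4+)
        glDrawPixels(width, height, GL_LUMINANCE, GL_FLOAT, data);

    Without glWindowPos, a raster position set via glRasterPos is transformed by the current modelview/projection matrices, which is another common way for the blit to land somewhere unexpected.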

    Read the article

  • C# Custom Dictionary Take - Convert Back From IEnumerable

    - by Goober
    Scenario

    Having already read a post on this on the same site, which didn't work, I'm feeling a bit stumped, but I'm sure I've done this before. I have a Dictionary. I want to take the first 200 values from the Dictionary.

    CODE

        Dictionary<int, SomeObject> oldDict = new Dictionary<int, SomeObject>();
        // oldDict gets populated somewhere else.
        Dictionary<int, SomeObject> newDict = new Dictionary<int, SomeObject>();
        newDict = oldDict.Take(200).ToDictionary();

    OBVIOUSLY, Take returns an IEnumerable, so you have to call ToDictionary() to convert it back to a dictionary of the same type. HOWEVER, it just doesn't work; it wants some random key selector thing - or something? I have even tried just casting it, but to no avail. Any ideas?
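    The missing piece: ToDictionary() over a sequence of KeyValuePair<int, SomeObject> has to be told what the key and the value are; that is the "key selector" the compiler is asking for. A sketch of the usual call:

        // Rebuild a dictionary from the first 200 KeyValuePairs by naming
        // the key and value selectors explicitly.
        Dictionary<int, SomeObject> newDict = oldDict
            .Take(200)
            .ToDictionary(kvp => kvp.Key, kvp => kvp.Value);

    One caveat: a Dictionary has no guaranteed order, so "the first 200" really means 200 arbitrary entries; if a specific order matters, add an OrderBy before the Take.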

    Read the article

  • Why index_merge is not used here?

    - by user198729
    Setup:

        mysql> create table t(a integer unsigned, b integer unsigned);
        mysql> insert into t(a,b) values (1,2),(1,3),(2,4);
        mysql> create index i_t_a on t(a);
        mysql> create index i_t_b on t(b);
        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL |    3 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Is there something I'm missing?
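    A plausible explanation rather than a definite one: with only three rows, a full table scan is cheaper than merging two index range scans, so the optimizer skips index_merge no matter what indexes exist. A sketch of how to test that theory by padding the table:

        -- Grow the table (repeat a few times), refresh statistics, re-check.
        INSERT INTO t(a,b) SELECT a+10, b+10 FROM t;
        ANALYZE TABLE t;
        EXPLAIN SELECT * FROM t WHERE a=1 OR b=4;
        -- With enough rows, the expected plan is type = index_merge with
        -- Extra = Using union(i_t_a,i_t_b); Using where

    (index_merge also requires a MySQL version and optimizer_switch setting where the union strategy is enabled, which is worth checking if the plan still refuses to change.)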

    Read the article

  • Declarative Architectures in Infrastructure as a Service (IaaS)

    - by BuckWoody
    I deal with computing architectures by first laying out requirements, and then laying in any constraints for its success. Only then do I bring in computing elements to apply to the system. As an example, a requirement might be "world-wide availability" and a constraint might be "with less than 80ms response time and full HA" or something similar. Then I can choose from the best fit of technologies, which range from full-up on-premises computing to IaaS, PaaS or SaaS. I also deal in abstraction layers - on-premises systems are fully under your control, in IaaS the hardware is abstracted (but not the OS, scale, runtimes and so on), in PaaS the hardware and the OS are abstracted and you focus on code and data only, and in SaaS everything is abstracted - you merely purchase the function you want (like an e-mail server or some such) and simply use it. When you think about solutions this way, the architecture becomes the primary factor in your decision. It's problem-first architecting: laying in whatever technology or vendor best fixes the problem. To that end, most architects design a solution using a graphical tool (I use Visio) and then create documents that let the rest of the team (and business) know what is required. It's the template, or recipe, for the solution. This is extremely easy to do for SaaS - you merely point out what the needs are, research the vendor and present the findings (and bill) to the business. IT might not even be involved there. In PaaS it's not much more complicated - you use the same Application Lifecycle Management and design tools you always have for code, such as Visual Studio or some other process and toolset, and you can "stamp out" the application in multiple locations, update it and so on. IaaS is another story. Here you have multiple machines, operating systems, patches, virus scanning, run-times, scale-patterns and tools and much more that you have to deal with, since essentially it's just an in-house system being hosted by someone else. You can certainly automate builds of servers - we do this as technical professionals every day. From Windows to Linux, it's simple enough to create a "build script" that makes a system just like the one we made yesterday. What is more problematic is being able to tie those systems together in a coherent way (as a solution) and then stamp that out repeatedly, especially when you might want to deploy that solution on-premises, or in one cloud vendor or another. Lately I've been working with a company called RightScale that does exactly this. I'll point you to their site for more info, but the general idea is that you document out your intent for a set of servers, and it will deploy them to on-premises clouds, Windows Azure, and other cloud providers all from the same script. In other words, it doesn't contain the images or anything like that - it contains the scripts to build them on-premises or on a cloud vendor like Microsoft. Using a tool like this, you combine the steps of designing a system (all the way down to passwords and accounts if you wish) and then the document drives the distribution and implementation of that intent. As time goes on and more and more companies implement solutions on various providers (perhaps for HA and DR), this becomes a compelling investigation. The RightScale information is here, if you want to investigate it further. Yes, there are other methods I've found, but most are tied to a single kind of cloud, and I'm not into vendor lock-in.
    Poppa Bear Level - Hands-on
    Evaluate RightScale at no cost. Just bring your Windows Azure credentials and follow these tutorials:
        - Sign Up for Windows Azure
        - Add Windows Azure to a RightScale Account
        - Windows Azure Virtual Machines 3-tier Deployment

    Momma Bear Level - Just the right level... ;0)
        - Windows Azure Evaluation Guide - if you are new to Windows Azure Virtual Machines and new to RightScale, we recommend that you read the entire evaluation guide to gain a more complete understanding of the Windows Azure + RightScale solution.
        - Windows Azure Support Page @ support.rightscale.com - FAQs, tutorials, etc. for Windows Azure Virtual Machines (Work in Progress)

    Baby Bear Level - Marketing
        - Windows Azure Page @ www.rightscale.com - find overview information including solution briefs and presentation & demonstration videos
        - Scale and Automate Applications on Windows Azure Solution Brief - how RightScale makes Windows Azure Virtual Machines even better
        - SQL Server on Windows Azure Solution Brief - Run Highly Available SQL Server on Windows Azure Virtual Machines

    Read the article

  • Hibernate: @UniqueConstraint with @ManyToOne field???

    - by Misha Koshelev
    Dear All: Using the following code:

        @Entity
        @Table(uniqueConstraints=[@UniqueConstraint(columnNames=["account", "name"])])
        class Friend {
            @Id @GeneratedValue(strategy=GenerationType.AUTO)
            public Long id

            @ManyToOne
            public Account account

            public String href
            public String name
        }

    I get the following error:

        org.hibernate.AnnotationException: Unable to create unique key constraint (account, name) on table Friend: account not found

    It seems this has to do with the @ManyToOne annotation, which I imagine actually creates a separate column. In any case, if I take it out, there is no complaint about the UniqueConstraint, but there is another error, which makes me believe it must be left in:

        org.hibernate.MappingException: Could not determine type for: com.mksoft.fbautomate.domain.Account, at table: Friend, for columns: [org.hibernate.mapping.Column(account)]

    Any hints how I can create the desired constraint (i.e., that each combination of account and name occurs only once)? Thank you! Misha
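    A hedged reading of the error: @UniqueConstraint takes column names, not property names, and the column Hibernate generates for a @ManyToOne property defaults to the property name plus "_id". So the constraint most likely needs the foreign-key column name (a sketch; adjust if the join column is mapped explicitly with @JoinColumn):

        // "account" is a property; the generated FK column is "account_id".
        @Entity
        @Table(uniqueConstraints=[@UniqueConstraint(columnNames=["account_id", "name"])])
        class Friend {
            // ...same fields as above...
        }

    The second exception ("Could not determine type for: ...Account") appears when @ManyToOne is removed because the Account field is then left with no mapping at all, which confirms the annotation itself should stay.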

    Read the article

  • Simple number-to-number (or number-to-hex) encryption algorithm that minimizes # of characters

    - by Clay Nichols
    I need to encrypt a number, and this encrypted value will be given to a customer as a key, so I want to minimize the number of digits and make them all printable. So I'd like the result to be either all numbers or all hex characters. The current encryption method I'm using (for non-numbers) converts the characters to hex (2 hex digits each). That doubles the number of characters. I also considered just treating the input as hex (so each pair of digits is treated as a hex pair), but then you have ambiguity between an input of 0123 and 123 (when decrypting, that leading '0' is lost). Any suggestions?

    Read the article
