Search Results

Search found 20799 results on 832 pages for 'long integer'.


  • How to "kill" background worker completely?

    - by Ken Hung
    Hi All, I am writing a Windows application that runs a sequence of digital IO actions repeatedly. The sequence starts when the user clicks a "START" button, and it is executed by a background worker in backgroundWorker1_DoWork(). However, there are occasions when I get the "This BackgroundWorker is currently busy..." error message. I am thinking of using a while loop to "kill" the background worker before starting another sequence of actions:

        if (backgroundWorker1.IsBusy == true)
        {
            backgroundWorker1.CancelAsync();
            while (backgroundWorker1.IsBusy == true)
            {
                backgroundWorker1.CancelAsync();
            }
            backgroundWorker1.Dispose();
        }
        backgroundWorker1.RunWorkerAsync();

    My main concern is: will backgroundWorker1 eventually be "killed"? If it will, will it take a long time to complete? Will this code get me into an infinite loop?
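    An aside on the mechanics: a BackgroundWorker cannot be forcibly killed. CancelAsync() only sets CancellationPending, which DoWork has to poll, and busy-waiting on IsBusy from the UI thread can spin forever, because IsBusy is only cleared by a completion callback marshalled back to the (now blocked) UI thread. A cooperative sketch instead (RunDigitalIoSequence and restartRequested are hypothetical names, not from the question):

        using System.ComponentModel;

        public partial class MainForm : System.Windows.Forms.Form
        {
            // backgroundWorker1 and startButton come from the Form designer;
            // assumes backgroundWorker1.WorkerSupportsCancellation = true.
            private bool restartRequested;

            private void startButton_Click(object sender, System.EventArgs e)
            {
                if (backgroundWorker1.IsBusy)
                {
                    restartRequested = true;           // restart once the worker stops
                    backgroundWorker1.CancelAsync();   // cooperative: DoWork must poll
                }
                else
                {
                    backgroundWorker1.RunWorkerAsync();
                }
            }

            private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
            {
                var worker = (BackgroundWorker)sender;
                while (!worker.CancellationPending)
                {
                    RunDigitalIoSequence();            // hypothetical helper for the IO actions
                }
                e.Cancel = true;                       // surfaces in RunWorkerCompleted
            }

            private void backgroundWorker1_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
            {
                if (restartRequested)
                {
                    restartRequested = false;
                    backgroundWorker1.RunWorkerAsync(); // safe now: the worker is idle
                }
            }
        }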


  • Which C# / .NET free or standard bits and pieces do I need to make a simple Windows desktop app backed by a database?

    - by jjujuma
    I'm coming from a Java / web background with no C# experience and I want to write a prototype C# / .NET desktop app to run against my existing DB2 database. The idea is that the prototype should use libraries and tools which are suitable for scaling up to full production and should be standard and free. Off the top of my head, the biggest things I need are: an IDE; a GUI toolkit / set of components; and a JDBC equivalent and/or possibly a full-blown ORM system. What are my options? Note I don't mind paying for full-blown Visual Studio in the long run, but for now everything needs to be free, including the IDE.
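    On the JDBC-equivalent point, ADO.NET is the corresponding layer in .NET; a minimal sketch, assuming IBM's ADO.NET provider (IBM.Data.DB2) is installed and referenced, with a made-up connection string:

        using System;
        using IBM.Data.DB2;

        class Db2Demo
        {
            static void Main()
            {
                // Hypothetical connection string; adjust host, port, database, credentials.
                var connStr = "Server=dbhost:50000;Database=SAMPLE;UID=user;PWD=secret;";
                using (var conn = new DB2Connection(connStr))
                using (var cmd = new DB2Command("SELECT 1 FROM SYSIBM.SYSDUMMY1", conn))
                {
                    conn.Open();
                    Console.WriteLine(cmd.ExecuteScalar());
                }
            }
        }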


  • PyGTK: Double-click on CellRenderer

    - by rami
    Hello! In my PyGTK application I currently use 'editable' to make cells editable. But since my cell contents sometimes are really, really large, I want to ask the user for changes in a new window when he double-clicks on a cell. But I could not find out how to hook onto double-clicks on specific cell renderers - I don't want to edit the whole row, and I also don't want to set this callback for the whole row, only for columns where overly long content can occur. How can I do this with CellRendererText() or something similar? My current cell-generating code is:

        cols[i] = gtk.TreeViewColumn(coltitle)
        cells[i] = gtk.CellRendererText()
        cols[i].pack_start(cells[i])
        cols[i].add_attribute(cells[i], 'text', i)
        cols[i].set_sizing(gtk.TREE_VIEW_COLUMN_FIXED)
        cols[i].set_fixed_width(100)
        cells[i].set_property('editable', True)
        cells[i].connect('edited', self.edited, (i, ls))
        cols[i].set_resizable(True)
        mytreeview.append_column(cols[i])

    Thanks!


  • NServiceBus pipeline with Distributors

    - by David
    I'm building a processing pipeline with NServiceBus, but I'm having trouble with the configuration of the distributors needed to make each step in the process scalable. Here's some info:

    The pipeline will have a master process that says "OK, time to start" for a WorkItem, which will then start a process like a flowchart.
    Each step in the flowchart may be computationally expensive, so I want the ability to scale out each step. This tells me that each step needs a Distributor.
    I want to be able to hook additional activities onto events later. This tells me I need to Publish() messages when a step is done, not Send() them.
    A process may need to branch based on a condition. This tells me that a process must be able to publish more than one type of message.
    A process may need to join forks. I imagine I should use Sagas for this.

    Hopefully these assumptions are good; otherwise I'm in more trouble than I thought. For the sake of simplicity, let's forget about forking or joining and consider a simple pipeline, with Step A followed by Step B, and ending with Step C. Each step gets its own distributor and can have many nodes processing messages.

    NodeA workers contain an IHandleMessages processor, and publish EventA.
    NodeB workers contain an IHandleMessages processor, and publish EventB.
    NodeC workers contain an IHandleMessages processor, and then the pipeline is complete.

    Here are the relevant parts of the config files, where # denotes the number of the worker (i.e. there are input queues NodeA.1 and NodeA.2):

    NodeA:

        <MsmqTransportConfig InputQueue="NodeA.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeA.Distrib.Control" DistributorDataAddress="NodeA.Distrib.Data">
          <MessageEndpointMappings>
          </MessageEndpointMappings>
        </UnicastBusConfig>

    NodeB:

        <MsmqTransportConfig InputQueue="NodeB.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeB.Distrib.Control" DistributorDataAddress="NodeB.Distrib.Data">
          <MessageEndpointMappings>
            <add Messages="Messages.EventA, Messages" Endpoint="NodeA.Distrib.Data" />
          </MessageEndpointMappings>
        </UnicastBusConfig>

    NodeC:

        <MsmqTransportConfig InputQueue="NodeC.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeC.Distrib.Control" DistributorDataAddress="NodeC.Distrib.Data">
          <MessageEndpointMappings>
            <add Messages="Messages.EventB, Messages" Endpoint="NodeB.Distrib.Data" />
          </MessageEndpointMappings>
        </UnicastBusConfig>

    And here are the relevant parts of the distributor configs:

    Distributor A:

        <add key="DataInputQueue" value="NodeA.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeA.Distrib.Control"/>
        <add key="StorageQueue" value="NodeA.Distrib.Storage"/>

    Distributor B:

        <add key="DataInputQueue" value="NodeB.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeB.Distrib.Control"/>
        <add key="StorageQueue" value="NodeB.Distrib.Storage"/>

    Distributor C:

        <add key="DataInputQueue" value="NodeC.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeC.Distrib.Control"/>
        <add key="StorageQueue" value="NodeC.Distrib.Storage"/>

    I'm testing with 2 instances of each node, and the problem seems to come up in the middle, at Node B. There are basically 2 things that might happen:

    Both instances of Node B report that they are subscribing to EventA, and also that NodeC.Distrib.Data@MYCOMPUTER is subscribing to the EventB that Node B publishes. In this case, everything works great.
    Both instances of Node B report that they are subscribing to EventA; however, one worker says NodeC.Distrib.Data@MYCOMPUTER is subscribing TWICE, while the other worker does not mention it.

    In the second case, which seems to be controlled only by the way the distributor routes the subscription messages, if the "overachiever" node processes an EventA, all is well. If the "underachiever" processes an EventA, then the publish of EventB has no subscribers and the workflow dies.

    So, my questions:

    Is this kind of setup possible? Is the configuration correct? It's hard to find any examples of configuration with distributors beyond a simple one-level publisher/2-worker setup.
    Would it make more sense to have one central broker process that does all the non-computationally-intensive traffic-cop operations, and only sends messages to processes behind distributors when the task is long-running and must be load balanced? Then the load-balanced nodes could simply reply back to the central broker, which seems easier. On the other hand, that seems at odds with the decentralization that is NServiceBus's strength. And if this is the answer, and the long-running process's "done" event is a reply, how do you keep the Publish that enables later extensibility on published events?
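    For context, a Node B worker in this layout would be plain NServiceBus handler code along these lines (a minimal sketch, not from the question; the WorkItemId member on the events is an assumption):

        using NServiceBus;

        // Minimal sketch of a Node B worker: handle EventA, publish EventB.
        public class EventAHandler : IHandleMessages<EventA>
        {
            public IBus Bus { get; set; }   // injected by NServiceBus

            public void Handle(EventA message)
            {
                // ... computationally expensive Step B work goes here ...
                Bus.Publish<EventB>(evt => evt.WorkItemId = message.WorkItemId);
            }
        }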


  • C# MySQL Lost Connection

    - by Adam
    Hi. I have a C# application and I'm using a MySQL database. Everything seems to be fine except one thing. Our computer network is a little bit unstable. When I'm trying to execute a query and the computer simultaneously loses connection to the MySQL server (I'm simulating this situation by unplugging the network cable from the computer that is the MySQL server), the program tries to do something for a long time (tens of seconds). I would like to specify something like a timeout which ends the query with an exception or something similar. I tried to add timeout parameters to the connection string, but with no effect (I've used ConnectionTimeout and DefaultCommandTimeout). Is there any other way to identify a lost connection after a few seconds? Thank you, Adam. P.S. Sorry for my English, I'm not a native speaker.
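    For reference, the standard knobs in MySQL Connector/Net look like this (a sketch only; the asker reports the connection-string timeouts didn't help, and in some Connector/Net versions a mid-query network loss is only noticed once the OS-level TCP timeout fires):

        using System;
        using MySql.Data.MySqlClient;

        class TimeoutDemo
        {
            static void Main()
            {
                // "Connection Timeout" bounds the connect attempt; "Default Command
                // Timeout" bounds each query. Both are in seconds.
                var connStr = "Server=dbhost;Database=mydb;Uid=user;Pwd=secret;" +
                              "Connection Timeout=5;Default Command Timeout=10;";
                using (var conn = new MySqlConnection(connStr))
                using (var cmd = new MySqlCommand("SELECT 1", conn))
                {
                    cmd.CommandTimeout = 10;   // per-command override, in seconds
                    try
                    {
                        conn.Open();
                        Console.WriteLine(cmd.ExecuteScalar());
                    }
                    catch (MySqlException ex)
                    {
                        // A dead link should surface here once a timeout elapses.
                        Console.WriteLine("Query failed or timed out: " + ex.Message);
                    }
                }
            }
        }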


  • How to put a newline into a column header in an xtable in R

    - by PaulHurleyuk
    I have a dataframe that I am putting into a Sweave document using xtable; however, one of my column names is quite long, and I would like to break it over two lines to save space:

        calqc_table <- structure(list(
            RUNID = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L),
            ANALYTEINDEX = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L),
            ID = structure(1:11, .Label = c("Cal A", "Cal B", "Cal C", "Cal D",
                "Cal E", "Cal F", "Cal G", "Cal H", "Cal High", "Cal Low", "Cal Mid"),
                class = "factor"),
            mean_conc = c(200.619459644855, 158.264703128903, 102.469121407733,
                50.3551544728544, 9.88296440865076, 4.41727762501703, 2.53494715706024,
                1.00602831741361, 199.065054555735, 2.48063347296935, 50.1499780776199),
            sd_conc = c(2.3275711264554, NA, NA, NA, NA, NA, NA, 0.101636943231162, 0, 0, 0),
            nrow = c(3, 1, 1, 1, 1, 1, 1, 3, 2, 2, 2)),
            .Names = c("Identifier of the Run within the Study", "ANALYTEINDEX", "ID",
                "mean_conc", "sd_conc", "nrow"),
            row.names = c(NA, -11L), class = "data.frame")

        calqc_xtable <- xtable(calqc_table)

    I have tried putting a newline into the name, but this didn't seem to work:

        names(calqc_table)[1] <- "Identifier of the \nRun within the Study"

    Is there a way to do this? I have seen someone suggest using the latex function from the Hmisc package to manually iterate over the table and write it out in LaTeX, including the newline, but this seems like a bit of a faff!


  • Streaming files from EventMachine handler?

    - by Noah
    I am creating a streaming EventMachine server. I'm concerned about avoiding blocking IO or doing anything else to muck up the event loop. From what I've read, Ruby's non-blocking IO can be used to stream files in a non-blocking way, or I can call next_tick, but I'm a little unclear about which of these approaches is preferable. Part of the problem is that I have not found a good explanation of the non-blocking IO library functions in Ruby. Short version: assuming a long-lived network IO operation, several wall-clock minutes of streaming per file transfer, what is the best way to do this in EventMachine without gumming up the event loop?

        while 1 do
          file.read do |bytes|
            @conn.send_data bytes
          end
        end

    I understand that the above code will block, and I'm wondering what to put in its place. Also, I cannot use the FileStreamer class that is part of EventMachine as is, because I need to manipulate the data after it's read but before it's sent. Thanks, Noah


  • HTML Table <thead> Position Fixed

    - by Bry4n
    Variations of this question have been answered, but none of them deal with my issue. I don't know if this can be achieved or not, but I thought if anyone knew, they would know it here. I have a table with a thead and one row with many columns. I want the thead to remain fixed (the row is extremely long); however, when the thead is fixed, it gets cut off at the end of the browser. Is there a way I can keep it fixed, but still see the rest of the thead when you scroll horizontally across the entire page?


  • javax.sql.DataSource.getConnection() locks system

    - by Ryan Elkins
    I'm using the Apache Commons DBCP library for connection pooling in a desktop application. I've done this before and never had a problem, but the latest application has started sometimes locking up on the call to getConnection() on my DataSource. The application just hangs after that call. I'm closing up my resources when I'm done with them. Is there any known reason why this might happen? I'm not even sure where to begin troubleshooting now that I've got it narrowed down to this method. It doesn't always hang - sometimes it happens fairly quickly, sometimes it takes a long time. Sometimes it doesn't happen at all, although lately I can get it to happen within a few minutes.


  • How to set timeout with python-mechanize?

    - by Michal Cihar
    I'm using python-mechanize to scrape some web sites which sometimes simply don't respond to requests, and these requests stay open too long, so I need to limit the timeout for them. When using the urlopen method, the timeout can be set via the timeout parameter, but I have not found an easy way to do it with the high-level API such as the submit or click methods. Ideally the timeout would be set just once for the whole browser class and all calls would honor it. It would probably be possible to customize this by passing a custom request_class to every click and submit call, but this would just pollute the code, so I'm looking for a nicer way to set a timeout on mechanize's browser class (and no, I don't want to change the default socket timeout using socket.setdefaulttimeout).


  • Timeline graph - how to handle "time gaps"?

    - by ebae
    I've been working with the Google Chart API and the annotated timeline. Drawing graphs is fine; I have no problem there. However, I need to draw a timeline graph for share prices. And as you may know, share prices are meaningful only between certain times (e.g. from 10AM to 4PM, when the market opens and closes). How do I change the Google timeline graph so that on the X-axis the range runs from 10AM to 4PM? Right now, it just draws a long constant line from 4PM until 10AM the next day, before prices start to move again. Man, I hope that makes sense. (Google Finance charts seem to do it.) Thank you SO much to whoever can answer. You are a CHAMPION!


  • How to use dropnet, sharpbox and other libraries in Monodroid

    - by sujit
    I have created an image uploader as a desktop application. Now I want to port it to Android using Monodroid. The application uploads images to Dropbox. In the desktop version I have used "dropnet", which references "sharpbox", "Json", etc. Is there any way I can use those libraries, i.e. dropnet, sharpbox, etc., in my Monodroid app? It will take very long if I have to recode libraries that are already available in .NET. Thanks. Sujit


  • The finest way to store hardcoded data in Android

    - by david
    Hi All! I am working on an application where I have a long listing of approximately 20 different states, each state consisting of several zones, in addition to the weather forecast of each zone eventually. Now, the thing is that I have already created the listing of hardcoded data in a list view of all the states. Next I need to add the hardcoded data of several zones for each state (min 4 to 5 zones per state). So I am wondering whether I have to create separate classes with hardcoded names of all the zones in each state, or whether there is an easier way out for doing the same. It would be great if anyone having an idea could tell me. Thanks, david


  • How to ignore/exclude the javadoc folder from validation during an Eclipse build?

    - by h2g2java
    In my WAR there is a huge javadoc folder. There is no point in validating it, since javadocs are produced by the Sun (Oracle) javadoc utility. I have forgotten how I did it the last time. I need to tell the Eclipse build not to validate that particular folder. Reasons why I need this:
    1. The HTML produced by the Sun javadoc generation utility does not meet the requirements that Eclipse enforces - there is a bug report in Eclipse, but Eclipse responds that the Sun javadoc generator's non-compliance is not their fault and that Eclipse intends to stick to strict compliance. This results in lots of HTML errors listed in the Problems tab.
    2. The javadoc folder is a remote link, and high activity on that link is using up my CPU resource; because it is a link to a remote location, that high CPU activity is sustained for a long time until Eclipse finishes scanning the whole 35MB of javadocs.
    Thanks - need help.


  • Best/Most Comprehensive API for Stocks/Financial Data

    - by Wilco
    What is the most recommended free/public API for accessing financial market stats and stock quotes (preferably real-time quotes)? I'm not too picky about how it's exposed (SOAP, REST, some proprietary XML setup, etc.), as long as it's got some decent documentation. I'm planning to build a simple web dashboard in PHP with some basic data (basically a quick-n-dirty homepage), but may want to grow it into a full-blown web app eventually. Any thoughts? As I find some, I'll post a list here (feel free to comment if you've used any of them before):

    Free: opentick (soprano)
    Not Free: XigniteRealTime


  • Spring MVC vs Seam

    - by darko petreski
    Hi, Spring MVC is a framework that has been out there a long time; it is well documented and proven technology, and a lot of web sites are using Spring. Seam is a framework based on JSF - the RichFaces implementation. It has a lot of AJAX-based components. It uses some heavy stuff like EJB and JPA. All of this is prone to errors, and this framework is so slow (on my computer it is almost impossible to develop anything because it is really slow, especially redeploying on JBoss). But it is very good for back-office applications. Does anyone have professional experience with these two frameworks? Can you recommend the better one? Why? Regards


  • Project Management Helps AmeriCares Deliver International Aid

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Alison Weiss

    Handle with Care

    Sound project management helps AmeriCares bring international aid to those in need.

    The stakes are always high for AmeriCares. On a mission to restore health and save lives during times of disaster, the nonprofit international relief and humanitarian aid organization delivers donated medicines, medical supplies, and humanitarian aid to people in the U.S. and around the globe. Founded in 1982 with the express mission of responding as quickly and efficiently as possible to help people in need, the Stamford, Connecticut-based AmeriCares has delivered more than US$10.5 billion in aid to 147 countries over the past three decades.

    “It’s critically important to us that we steward all the donations and that the medical supplies and medicines get to people as quickly as possible with no loss,” says Kate Sears, senior vice president for finance and technology at AmeriCares. “Whether we’re shipping IV solutions to victims of cholera in Haiti or antibiotics to Somali famine victims, we need to get the medicines there sooner because it means more people will be helped and lives improved or even saved.”

    Ten years ago, the tracking systems used by AmeriCares associates were paper-based. In recent years, staff started using spreadsheets, but the tracking processes were not standardized between teams. “Every team was tracking completely different information,” says Megan McDermott, senior associate, Sub-Saharan Africa partnerships, at AmeriCares. “It was just a few key things. For example, we tracked the date a shipment was supposed to arrive and the date we got reports from our partner that a hospital received aid on their end.”

    While the data was accurate, much detail was being lost in the process. AmeriCares management knew it could do a better job of tracking this enterprise data and in 2011 took a significant step by implementing Oracle’s Primavera P6 Professional Project Management. “It’s a comprehensive solution that has helped us improve the monitoring and controlling processes. It has allowed us to do our distribution better,” says Sears. In addition, the implementation effort has been a change agent, helping AmeriCares leadership rethink project management across the entire organization. Initially, much of the focus was on standardizing processes, but staff members also learned the importance of thinking proactively to prevent possible problems and evaluating results to determine if goals and objectives are truly being met. Such data about process efficiency and overall results is critical not only to AmeriCares staff but also to the donors supporting the organization’s life-saving missions.

    Efficiency Saves Lives

    One of AmeriCares’ core operations is to gather product donations from the private sector, establish where the most-urgent needs are, and solicit monetary support to send the aid via ocean cargo or airlift to welfare- and health-oriented nongovernmental organizations, hospitals, health networks, and government ministries based in areas in need. In 2011 alone, AmeriCares sent more than 3,500 shipments to 95 countries in response to both ongoing humanitarian needs and more than two dozen emergencies, including deadly tornadoes and storms in the U.S. and the devastating tsunami in Japan.

    When it comes to nonprofits in general, donors want to know that the charitable organizations they support are using funds wisely. Typically, nonprofits are evaluated by donors in terms of efficiency, an area where AmeriCares has an excellent reputation: 98 percent of expenses go directly to supporting programs and less than 2 percent represent administrative and fundraising costs.

    Donors, however, should look at more than simple efficiency, says Peter York, senior partner and chief research and learning officer at TCC Group, a nonprofit consultancy headquartered in New York, New York. They should also look at whether organizations have the systems in place to sustain their missions and continue to thrive. An expert on nonprofit organizational management, York has spent years studying sustainable charitable organizations. He defines them as nonprofits that are able to achieve the ongoing financial support to stay relevant and continue doing core mission work. In his analysis of well over 2,500 larger nonprofits, York has found that many are not sustaining, and are actually scaling back in size. “One of the biggest challenges of nonprofit sustainability is the general public’s perception that every dollar donated has to go only to the delivery of service,” says York. “What our data shows is that there are some fundamental capacities that have to be there in order for organizations to sustain and grow.”

    York’s research highlights the importance of data-driven leadership at successful nonprofits. “You’ve got to have the tools, the systems, and the technologies to get objective information on what you do, the people you serve, and the results you’re achieving,” says York. “If leaders don’t have the knowledge and the data, they can’t make the strategic decisions about programs to take organizations to the next level.”

    Historically, AmeriCares associates have used time-tested and cost-effective strategies to ship and then track supplies from donation to delivery to their destinations in designated time frames. When disaster strikes, AmeriCares ships by air and generally pulls out all the stops to deliver the most urgently needed aid within the first few days and weeks. Then, as situations stabilize, AmeriCares turns to delivering sea containers for the postemergency and ongoing aid so often needed over the long term.

    According to McDermott, getting a shipment out the door is fairly complicated, requiring as many as five different AmeriCares teams collaborating together. The entire process can take months—from when products are received in the warehouse and deciding which recipients to allocate supplies to, to getting customs and governmental approvals in place, actually shipping products, and finally ensuring that the products are received in-country.

    Delivering that aid is no small affair. “Our volume exceeds half a billion dollars a year worth of donated medicines and medical supplies, so it’s a sizable logistical operation to bring these products in and get them out to the right place quickly to have the most impact,” says Sears. “We really pride ourselves on our controls and efficiencies.” Adding to that complexity is the fact that the longer it takes to deliver aid, the more dire the human need can be. Any time AmeriCares associates can shave off the complicated aid delivery process can translate into lives saved. “It’s really being able to track information consistently that will help us to see where are the bottlenecks and where can we work on improving our processes,” says McDermott.

    Setting a Standard

    Productivity and information management improvements were key objectives for AmeriCares when staff began the process of implementing Oracle’s Primavera solution. But before configuring the software, the staff needed to take the time to analyze the systems already in place. According to Greg Loop, manager of database systems at AmeriCares, the organization received guidance from several consultants, including Rich D’Addario, consulting project manager in the Primavera Global Business Unit at Oracle, who was instrumental in shepherding the critical requirements-gathering phase. D’Addario encouraged staff to begin documenting shipping processes by considering the order in which activities occur and which ones are dependent on others to get accomplished. This exercise helped everyone realize that to be more efficient, they needed to keep track of shipments in a more standard way. “The staff didn’t recognize formal project management methodology,” says D’Addario. “But they did understand what the most important things are and that if they go wrong, an entire project can go off course.”

    Before, if a boatload of supplies was being sent to Haiti and there was a problem somewhere, a lot of time was taken up finding out where the problem was—because staff was not tracking things in a standard way. As a result, even more time was needed to find possible solutions to the problem and alert recipients that the aid might be delayed. “For everyone to put on the project manager hat and standardize the way every single thing is done means that now the whole organization is on the same page as to what needs to occur from the time a hurricane hits Haiti and when a boat pulls in to unload supplies,” says D’Addario.

    With so much care taken to put a process foundation firmly in place, configuring the Primavera solution was actually quite simple. Specific templates were set up for different types of shipments, and dashboards were implemented to provide executives with clear overviews of every project in the system. AmeriCares’ Loop reports that system planning, refining, and testing, followed by writing up documentation and training, took approximately four months. The system went live in spring 2011 at AmeriCares’ Connecticut headquarters. While the nonprofit has an international presence, with warehouses in Europe and offices in Haiti, India, Japan, and Sri Lanka, most donated medicines come from U.S. entities and are shipped from the U.S. out to the rest of the world. In addition, all shipments are tracked from the U.S. office.

    AmeriCares doesn’t expect the Primavera system to take months off the shipping time, especially for sea containers. However, any time saved is still important because it will allow aid to be delivered to people more quickly at a lower overall cost. “If we can trim a day or two here or there, that can translate into lives that we’re saving, especially in emergency situations,” says Sears.

    A Cultural Change

    Beyond the measurable benefits that come with IT-driven process improvement, AmeriCares management is seeing a change in culture as a result of the Primavera project. One change has been treating every shipment of aid as a project, and everyone involved with facilitating shipments as a project manager. “This is a revolutionary concept for us,” says McDermott. “Before, we were used to thinking we were doing logistics—getting a container from point A to point B without looking at it as one project and really understanding what it meant to manage it.”

    AmeriCares staff is also happy to report that collaboration within the organization is much more efficient. When someone creates a shipment in the Primavera system, the same shared template is used, which means anyone can log in to the system to see the status of a shipment. Knowledgeable staff can access a shipment project to help troubleshoot a problem. Management can easily check the status of projects across the organization. “Dashboards are really useful,” says McDermott. “Instead of going into the details of each project, you can just see the high-level real-time information at a glance.”

    The new system is helping team members focus on proactively managing shipments rather than simply reacting when problems occur. For example, when a container is shipped, documents must be included for customs clearance. Now, the shipping template has built-in reminders to prompt team members to ask for copies of these documents from freight forwarders and to follow up with partners to discover if a shipment is on time. In the past, staff may not have worked on securing these documents until they’d been notified a shipment had arrived in-country.

    Another benefit of capturing and adopting best practices within the Primavera system is that staff training is easier. “Capturing the processes in documented steps and milestones allows us to teach new staff members how to do their jobs faster,” says Sears. “It provides them with the knowledge of their predecessors so they don’t have to keep reinventing the wheel.”

    With the Primavera system already generating positive results, management is eager to take advantage of advanced capabilities. Loop is working on integrating the company’s proprietary inventory management system with the Primavera system so that when logistics or warehousing operators input data, the information will automatically go into the Primavera system. In the past, this information had to be manually keyed into spreadsheets, often leading to errors.

    Mining Historical Data

    Another feature on the horizon for AmeriCares is utilizing Primavera P6 Professional Project Management reporting capabilities. As the system begins to include more historical data, management soon will be able to draw on this information to conduct analysis that has not been possible before and create customized reports. For example, at the beginning of the shipment process, staff will be able to use historical data to more accurately estimate how long the approval process should take for a particular country. This could help ensure that food and medicine with limited shelf lives do not get stuck in customs or used beyond their expiration dates.

    The historical data in the Primavera system will also help AmeriCares with better planning year to year. The nonprofit’s staff has always put together a plan at the beginning of the year, but this has been very challenging simply because it is impossible to predict disasters. Now, management will be able to look at historical data and see trends and statistics as they set current objectives and prepare for future need. In addition, this historical data will provide AmeriCares management with the ability to review year-end data and compare actual project results with goals set at the beginning of the year—to see if desired outcomes were achieved and if there are areas that need improvement.

    It’s this type of information that is so valuable to donors. And, according to York, project management software can play a critical role in generating the data to help nonprofits sustain and grow. “It is important to invest in systems to help replicate, expand, and deliver services,” says York. “Project management software can help because it encourages nonprofits to examine program or service changes and how to manage moving forward.”

    Sears believes that AmeriCares donors will support the return on investment the organization will achieve with the Primavera solution. “It won’t be financial returns, but rather how many more people we can help for a given dollar or how much more quickly we can respond to a need,” says Sears. “I think donors are receptive to such arguments.”

    And for AmeriCares, it is all about the future and increasing results. The project management environment currently may be quite simple, but IT staff plans to expand the complexity and functionality as the organization grows in its knowledge of project management and the goals it wants to achieve. “As we use the system over time, we’ll continue to refine our best practices and accumulate more data,” says Sears. “It will advance our ability to make better data-driven decisions.”


  • How to make TeamCity's svn checkout more verbose?

    - by Benju
    We have a very large svn external containing about 30,000 500k files. This checkout can take a long time, and we would like to see its progress in the TeamCity logs as it happens. Is there a way to use more verbose logging during the svn checkout than just:

        [19:26:00]: Updating sources: Agent side checkout...
        [19:26:00]: [Updating sources: Agent side checkout...] Will perform clean checkout. Reason: Checkout directory is empty or doesn't exist
        [19:26:00]: [Updating sources: Agent side checkout...] Cleaning /opt/TeamCity/buildAgent/work/937995fe3d15f1e7
        [19:26:00]: [Updating sources: Agent side checkout...] VCS Root: guru 6 trunk with externals
        [19:26:00]: [VCS Root: guru 6 trunk with externals] revision: 6521_2010/04/27 19:25:58 -0500


  • Is SubSonic's CodingHorror the only way to do WHERE ISNULL?

    - by cantabilesoftware
    I'm trying to do a simple UPDATE ... WHERE ISNULL() using SubSonic ActiveRecord, and the only way I can get it to work is by using CodingHorror, e.g.:

        public void MarkMessagesRead(long? from_person)
        {
            if (from_person.HasValue)
            {
                _db.Update<message>()
                   .Set(x => x.is_read == true)
                   .Where(x => x.from_id == from_person && x.to_id == people_id)
                   .Execute();
            }
            else
            {
                new SubSonic.Query.CodingHorror(
                    _db.DataProvider,
                    "UPDATE messages SET is_read=1 WHERE ISNULL(from_id) AND to_id=@toid",
                    people_id).Execute();
            }
        }

    Am I missing something?
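    One thing that may be worth trying first (a hedged sketch, not verified here: whether SubSonic's expression translator turns a null comparison into SQL IS NULL depends on the version and provider in use):

        // Hypothetical alternative: if the lambda-to-SQL translator maps
        // "== null" to IS NULL, the CodingHorror branch collapses into:
        _db.Update<message>()
           .Set(x => x.is_read == true)
           .Where(x => x.from_id == null && x.to_id == people_id)
           .Execute();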


  • Windows TCP connection - Still alive after the process terminates

    - by Kartlee
    Hi People, I run a license server on Linux and a process on Windows that checks out tokens from it. The process makes a TCP socket connection to the server to communicate, and once the Windows process is closed, the tokens are checked back in to the server. But I sometimes see the connection shown as ESTABLISHED in netstat output even when the Windows process has terminated. This happens when the Windows process has been running for a long time before it terminates. It takes 2-3 hours for the connection to go away in the netstat output:

        TCP    BABDT350:4505    180.190.40.34:51847    ESTABLISHED    2832    [app.EXE]

    Can you guys tell me: is this a network stack configuration issue in Windows? Is it possible for the connection to go away as soon as the process terminates? Please let me know your answers to this.
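    For background: a TCP connection only leaves ESTABLISHED when a FIN/RST is exchanged or a timer gives up, so with no traffic flowing, a half-open connection can linger for hours. Where you control the socket code, enabling TCP keepalive is the usual way to have dead peers detected; a C# sketch under that assumption (the host and port are hypothetical):

        using System.Net.Sockets;

        class KeepAliveDemo
        {
            static void Main()
            {
                var client = new TcpClient("license-server", 27000); // hypothetical endpoint

                // Ask the OS to probe the idle peer periodically; a dead connection
                // is then reset instead of sitting in ESTABLISHED indefinitely.
                client.Client.SetSocketOption(SocketOptionLevel.Socket,
                                              SocketOptionName.KeepAlive, true);
            }
        }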


  • Why is there no music streaming API service?

    - by Chad Johnson
    Apple has decided to kill lala.com. I loved that site. Now everyone has to go back to paying $0.89+ for songs from Amazon, iTunes, etc. Lame. Rhapsody would be great, except there are no clients for Mac or Linux. They do have a web interface, but it is nothing compared to lala's web 2.0y interface. What I just don't understand is: why is there no music streaming API service out there? Basically, developers could hook the service into any desktop or web app, and then users of the app could pay $x a month (like with Rhapsody) and play any amount of music, so long as their subscription is active. Why not? Lala streamed music to web browsers, so surely such a service could be as secure as lala is (was), preventing music theft.


  • How do I debug a .NET executable at MSIL-level?

    - by Eyal
    I have a .NET executable file that I need to debug. I would like to step into it so that it stops on the first instruction, and have a visual interface for single-stepping, breakpoints, etc. This seems like it should be easy, but I haven't yet found a solution! I read about DbgCLR.exe on the web, but I can't find that file on my system or online for the life of me. I also read somewhere that DbgCLR.exe is no longer necessary because Visual Studio can do the same thing. A Visual Studio .NET solution would be great, too! (Maybe there's a menu item that I overlooked?) Either will suit, so long as I can inspect the stack, set breakpoints, etc.


  • Phonegap: Slow response with vibrate notification

    - by jjei
    I created a simple Android application with jQuery and PhoneGap. When testing the app on a phone, I noticed that the vibration effect, which I use to indicate that the user has touched a button, comes after a delay of maybe 0.5 seconds. This is way too long a delay and just confuses the user. Is this just a downside of using PhoneGap? Or is there any configuration or additional framework which could be used to make the app respond and produce the vibration more quickly? I installed the vibration plugin like this:

        phonegap local plugin add https://git-wip-us.apache.org/repos/asf/cordova-plugin-vibration.git

    I use the code below to create the vibration effect:

        navigator.notification.vibrate(200);

    My PhoneGap version is 3.0.0-0.14.3


  • Displaying rows in multiple columns

    - by zizo
    Not sure if this is doable using SQL alone or not, but here is the problem. I have a weird requirement that data needs to be displayed in side-by-side columns so users can compare it quickly! Here is what the result set looks like right now:

        CustomerID   Company   Active
        001          ATT       Y
        002          ATT       N
        003          ATT       Y
        001          VZ        Y
        002          VZ        N
        003          VZ        Y
        001          TM        Y
        002          TM        Y
        003          TM        Y

    Now this is how they want to see it:

        CustomerID   Company   Active   Company   Active   Company   Active
        001          ATT       Y        VZ        Y        TM        Y
        002          ATT       N        VZ        N        TM        Y
        003          ATT       Y        VZ        Y        TM        Y

    Assumptions: this could be a pretty long table; that's why they want to see all companies on one row, rather than needing to scroll down to check whether each is active or not. The number of companies is between 1 and 3 in most cases. Any help is appreciated. Thanks!


  • Using non-primitive types in ServiceOperation for WCF Data Service (3.5 SP1)

    - by Nix
    Is there any way at all to create a "mock" entity type for use in a WCF Service Operation? We have some queries we need to optimize by exposing them as a ServiceOperation. The problem is that doing so would result in a very long list of primitive types, e.g.:

        SomeoneHelpMe(int time, string name, string address, string i, string purple, string foo, int stillGoing, int tooMany, etc...)

    And we really need to reduce this to:

        SomeoneHelpedMe(CustomEntityNotMappedToAnything e)

    This would also help us when it comes time to write some complex queries, since there is a 3-parameter limitation... I saw this will be possible in 4.0 using "complex types", but I am still in the 3.5 SP1 world. Let me know if anyone needs more information.
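    One workaround sometimes used on 3.5 SP1, sketched here purely as an assumption (the entity and context names are made up, not from the question): collapse the parameter list into a single serialized string and unpack it inside the operation:

        using System.Linq;
        using System.ServiceModel.Web;
        using System.Xml.Linq;

        public class HelpMeService : System.Data.Services.DataService<MyEntities>
        {
            // Pack the many values into one XML string, since 3.5 SP1 service
            // operations only accept primitive parameters.
            [WebGet]
            public IQueryable<SomeEntity> SomeoneHelpedMe(string criteriaXml)
            {
                var c = XElement.Parse(criteriaXml);
                int time    = (int)c.Element("time");
                string name = (string)c.Element("name");
                // ...remaining fields follow the same pattern...

                return CurrentDataSource.SomeEntities
                       .Where(e => e.Name == name);
            }
        }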

