Search Results

Search found 80052 results on 3203 pages for 'data load performance'.


  • image scaling with C

    - by sa125
    Hi - I'm trying to read an image file and scale its pixel levels by multiplying each byte by some factor. I'm not sure I'm doing it right, though - void scale_file(char *infile, char *outfile, float scale) { // open files for reading FILE *infile_p = fopen(infile, 'r'); FILE *outfile_p = fopen(outfile, 'w'); // init data holders char *data; char *scaled_data; // read each byte, scale and write back while ( fread(&data, 1, 1, infile_p) != EOF ) { *scaled_data = (*data) * scale; fwrite(&scaled_data, 1, 1, outfile); } // close files fclose(infile_p); fclose(outfile_p); } What gets me is how to do the per-byte multiplication (scale is a 0-1.0 float) - I'm pretty sure I'm either reading it wrong or missing something big. Also, the data is assumed to be unsigned (0-255). Please don't judge my poor code :) thanks
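
    A minimal corrected sketch of the byte-by-byte approach (hedged: fopen takes a double-quoted mode string rather than a char literal, fread needs a real buffer and returns a count rather than EOF, and the write has to go to the FILE* handle). It also assumes a raw, headerless format; for BMP, PNG, etc. this would scale the header bytes too:

        #include <stdio.h>

        /* Scale every byte of infile by 'scale' (0.0-1.0) and write the result to outfile. */
        void scale_file(const char *infile, const char *outfile, float scale)
        {
            FILE *infile_p  = fopen(infile,  "rb");   /* mode is a string, not a char literal */
            FILE *outfile_p = fopen(outfile, "wb");
            if (!infile_p || !outfile_p)
                return;

            unsigned char byte;                         /* pixel data is unsigned, 0-255 */
            while (fread(&byte, 1, 1, infile_p) == 1) { /* fread returns items read, not EOF */
                unsigned char scaled = (unsigned char)(byte * scale);
                fwrite(&scaled, 1, 1, outfile_p);       /* write to the FILE*, not the name */
            }

            fclose(infile_p);
            fclose(outfile_p);
        }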

    Read the article

  • How to modify by loading and saving the same xml file

    - by cynthia hong
    I have a problem with modifying an XML file when I first load it and then save it with the same file path and name. Below is my code. The error is "Access to the path C:\MyApp\Web.config is denied." If I change the path of the xdoc.Save to be different from xdoc.Load, then it works. What is your recommendation to solve this problem? If possible, I need to modify the existing XML file (meaning the XML file for loading and saving is the same path). XmlDocument xdoc = new XmlDocument(); xdoc.Load(@"C:\\MyApp\\Web.config"); XmlNode xn = xdoc.SelectSingleNode("//configuration/MyProvider"); XmlElement el = (XmlElement)xn; el.SetAttribute("defaultProvider", "MyCustomValue"); xdoc.Save(@"C:\\MyApp\\Web.config"); Thanks in advance.
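
    A hedged sketch of one common workaround, assuming the "Access ... denied" comes from a read-only attribute on the file rather than from NTFS/IIS permissions (paths and node names as in the question; if it really is permissions, the account running the code needs write access to the file instead):

        using System.IO;
        using System.Xml;

        string path = @"C:\MyApp\Web.config";

        // Save() fails with "access denied" if the file is marked read-only.
        FileAttributes attrs = File.GetAttributes(path);
        if ((attrs & FileAttributes.ReadOnly) != 0)
            File.SetAttributes(path, attrs & ~FileAttributes.ReadOnly);

        XmlDocument xdoc = new XmlDocument();
        xdoc.Load(path);

        XmlElement el = (XmlElement)xdoc.SelectSingleNode("//configuration/MyProvider");
        el.SetAttribute("defaultProvider", "MyCustomValue");

        // Writing through an explicit stream makes the open/close of the file explicit.
        using (FileStream fs = new FileStream(path, FileMode.Create, FileAccess.Write))
        {
            xdoc.Save(fs);
        }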

    Read the article

  • Imaging: Paper Paper Everywhere, but None Should be in Sight

    - by Kellsey Ruppel
    Author: Vikrant Korde, Technical Architect, Aurionpro's Oracle Implementation Services team

    My wedding photos are stored in several empty shoeboxes. Yes...I got married before digital photography was mainstream...which means I'm old. But my parents are really old. They have shoeboxes filled with vacation photos on slides (I doubt many of you have even seen a home slide projector...and I hope you never do!). Neither I nor my parents should have shoeboxes filled with any form of photographs whatsoever. They should obviously live in the digital world...with no physical versions in sight (other than a few framed on our walls).

    Businesses grapple with similar challenges. But instead of shoeboxes, they have file cabinets and warehouses jam-packed with paper invoices, legal documents, human resource files, material safety data sheets, incident reports, and the list goes on and on. In fact, regulatory and compliance rules govern many industries, requiring that this paperwork be available for any number of years. It's a real challenge...especially trying to find archived documents quickly, many times with no backup. Which brings us to a set of technologies called Image Process Management (or simply Imaging or Image Processing) that are transforming these antiquated, paper-based processes.

    Oracle's WebCenter Content Imaging solution is a combination of their WebCenter suite, which offers a robust set of content and document management features, and their Business Process Management (BPM) suite, which helps to automate business processes through the definition of workflows and business rules. Overall, the solution provides an enterprise-class platform for end-to-end management of document images within transactional business processes. It's a solution that provides all of the capabilities needed - from document capture and recognition, to imaging and workflow - to effectively transform your ‘shoeboxes’ of files into digitally managed assets that comply with strict industry regulations.

    The terminology can be quite overwhelming if you're new to the space, so we've provided a summary of the primary components of the solution below, along with a short description of the two paths that can be executed to load images of scanned documents into Oracle's WebCenter suite.

    - WebCenter Imaging (WCI): the electronic document repository that provides security, annotations, and search capabilities, and is the primary user interface for managing work items in the imaging solution
    - SOA & BPM Suites (workflow): provide business process management capabilities, including human tasks, workflow management, service integration, and all other standard SOA features. It's interesting to note that there are a number of 'jumpstart' processes available to help accelerate the integration of business applications, such as the accounts payable invoice processing solution for E-Business Suite that facilitates the processing of large volumes of invoices
    - WebCenter Enterprise Capture (WEC): expedites the capture process of paper documents to digital images, offering high volume scanning and importing from email, and allows for flexible indexing options
    - WebCenter Forms Recognition (WFR): automatically recognizes, categorizes, and extracts information from paper documents with greatly reduced human intervention
    - WebCenter Content: the backend content server that provides versioning, security, and content storage

    There are two paths that can be executed to send data from WebCenter Capture to WebCenter Imaging, both of which are described below:

    1. Direct Flow - This is the simplest and quickest way to push an image scanned from WebCenter Enterprise Capture (WEC) to WebCenter Imaging (WCI), using the bare minimum metadata. The WEC activities are defined below:
    - The paper document is scanned (or imported from email).
    - The scanned image is indexed using a predefined indexing profile.
    - The image is committed directly into the process flow.

    2. WFR (WebCenter Forms Recognition) Flow - This is the more complex process, during which data is extracted from the image using a series of operations including Optical Character Recognition (OCR), Classification, Extraction, and Export. This process creates three files (TIFF, XML, and TXT), which are fed to the WCI Input Agent (the high speed import/filing module). The WCI Input Agent directory is a standard ingestion method for adding content to WebCenter Imaging; the process for doing so is described below:
    - WEC commits the batch using the respective commit profile. A TIFF file is created, passing data through the file name by including values separated by "_" (underscores).
    - WFR completes OCR, classification, extraction, and export, and pulls the data from the image. In addition to the TIFF file, which contains the document image, an XML file containing the extracted data and a TXT file containing the metadata that will be filled in WCI are also created.
    - All three files are exported to WCI's Input Agent directory.
    - Based on previously defined "input masks", the WCI Input Agent picks up the seeding file (often the TXT file).
    - Finally, the TIFF file is pushed into UCM and a unique web-viewable URL is created. Based on the mapping data read from the TXT file, a new record is created in the WCI application.

    Although these processes may seem complex, each Oracle component works seamlessly together to achieve a high performing and scalable platform. The solution has been field tested at some of the largest enterprises in the world and has transformed millions and millions of paper-based documents into more easily manageable digital assets. For more information on how an Imaging solution can help your business, please contact [email protected] (for U.S. West inquiries) or [email protected] (for U.S. East inquiries).

    About the Author: Vikrant is a Technical Architect in Aurionpro's Oracle Implementation Services team, where he delivers WebCenter-based Content and Imaging solutions to Fortune 1000 clients. With more than twelve years of experience designing, developing, and implementing Java-based software solutions, Vikrant was one of the founding members of Aurionpro's WebCenter-based offshore delivery team. He can be reached at [email protected].

    Read the article

  • Exporting XML from FIleMaker Pro Server

    - by Jeno
    FileMaker Pro 10 Server: Mac OS X Server 10.4.11. DATA Server: Windows Server 2008. I am having a cross-platform issue when exporting XML from the FileMaker Pro client on a Mac to the DATA server. My FileMaker Pro server is hosted on Mac OS X, and I need users to export their data to a DATA server which is hosted on a Windows server. I created a button (function/script) in the FileMaker form for users to export data once they finish their job. The FileMaker Pro client on the PC works perfectly, but it doesn't work on the Mac. I've tried every combination I can think of for the location path as documented at: http://www.filemaker.com/11help/html/create_db.8.32.html#1030283 Any ideas? Thanks

    Read the article

  • Using SQLite transactions - I/O Error

    - by james.ingham
    I currently have a client/server setup where the client sends data to the server and the server then saves the data to a SQLite database file. To do this I am using transactions, which works fine in Windows 7 when I run around 30 clients (each client sending data back every 5-30 seconds). When using the same software in Windows XP, I can get/set data multiple times with no problems, but once I run around 20 clients I start to get "Windows - Delayed Write Failed" errors. This fires an exception on the server. I'm assuming this is either something to do with XP or a hardware issue on the machine I'm running XP on. Does anyone have any advice on how to avoid this? Or should I just catch the exception and retry saving the data? Thanks
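
    The catch-and-retry the poster asks about can be done with a small retry loop. A rough sketch, assuming the server uses System.Data.SQLite from C# (adjust for the actual data layer); note that "Delayed Write Failed" usually points at the disk, its driver, or write caching on the XP machine, so retrying only papers over the underlying problem:

        // Namespaces assumed: System.Data.SQLite, System.Threading.
        // Retry the whole transaction a few times with a short back-off instead of
        // letting a single failed write bring the server down.
        const int maxAttempts = 5;
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                using (var conn = new SQLiteConnection("Data Source=server.db"))
                {
                    conn.Open();
                    using (var tx = conn.BeginTransaction())
                    {
                        // ... execute the INSERT/UPDATE commands for this batch ...
                        tx.Commit();
                    }
                }
                break; // success
            }
            catch (SQLiteException)
            {
                if (attempt >= maxAttempts) throw;  // give up after a few tries
                Thread.Sleep(250 * attempt);        // brief back-off before retrying
            }
        }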

    Read the article

  • Matching n parentheses in perl regex

    - by coding_hero
    Hi, I've got some data that I'm parsing in Perl, and will be adding more and more differently formatted data in the near future. What I would like to do is write an easy-to-use function, that I could pass a string and a regex to, and it would return anything in parentheses. It would work something like this (pseudocode): sub parse { $data = shift; $regex = shift; $data =~ eval ("m/$regex/") foreach $x ($1...$n) { push (@ra, $x); } return \@ra; } Then, I could call it like this: @subs = parse ($data, '^"([0-9]+)",([^:]*):(\W+):([A-Z]{3}[0-9]{5}),ID=([0-9]+)'); As you can see, there's a couple of issues with this code. I don't know if the eval would work, the 'foreach' definitely wouldn't work, and without knowing how many parentheses there are, I don't know how many times to loop. This is too complicated for split, so if there's another function or possibility that I'm overlooking, let me know. Thanks for your help!
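
    For what it's worth, Perl already does most of this: a match in list context returns all of its capture groups, so no eval and no knowledge of the group count is needed. A small sketch (the sample line is made up to match the quoted regex):

        use strict;
        use warnings;

        # Return a reference to all capture groups of $regex in $data (empty if no match).
        sub parse {
            my ($data, $regex) = @_;
            my @captures = ($data =~ /$regex/);   # list context yields $1, $2, ... $n
            return \@captures;
        }

        my $data = '"123",foo: :ABC12345,ID=678';
        my $subs = parse($data,
            qr/^"([0-9]+)",([^:]*):(\W+):([A-Z]{3}[0-9]{5}),ID=([0-9]+)/);
        print "$_\n" for @$subs;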

    Read the article

  • ByteStrings in Haskell

    - by Jon
    So I am trying to write a program that can read in a Java class file as bytecode. For this I am using Data.Binary and Data.ByteString. The problem is that, being pretty new to Haskell, I am having trouble actually using these tools. module Main where import Data.Binary.Get import Data.Word import qualified Data.ByteString.Lazy as S getBinary :: Get Word8 getBinary = do a <- getWord8 return (a) main :: IO () main = do contents <- S.getContents print (getBinary contents) This is what I have come up with so far and I fear that it's not really even on the right track. Although I know this question is very general, I would appreciate some help with what I should be doing with the reading.
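
    The piece that is probably missing is runGet, which actually runs a Get parser over a lazy ByteString. A small sketch (the file name is made up; class files start with the big-endian magic number 0xCAFEBABE, so reading the header words is a reasonable first test):

        module Main where

        import Data.Binary.Get
        import Data.Word
        import qualified Data.ByteString.Lazy as S

        -- Parse the 4-byte magic number and the two version words of a class file.
        getHeader :: Get (Word32, Word16, Word16)
        getHeader = do
          magic <- getWord32be   -- should be 0xCAFEBABE
          minor <- getWord16be
          major <- getWord16be
          return (magic, minor, major)

        main :: IO ()
        main = do
          contents <- S.readFile "Example.class"
          print (runGet getHeader contents)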

    Read the article

  • How (and where) to get aligned tRNA sequences (and import it into R)

    - by Tal Galili
    (This is a database / R commands question.) I wish (for my thesis work) to import tRNA data into R and have it aligned. My questions are: 1) What resources can I use for the data? 2) What commands might help me with the import/alignment? So far, I have found two nice repositories that hold such data: http://trnadb.bioinf.uni-leipzig.de/Result http://gtrnadb.ucsc.edu/download.html And also the readFASTA command from Biostrings, which does basic importing of the data into R. My problem remains how to handle the alignment of the tRNA. Since I am not from the field, I might be missing a very basic answer (like where I should download the data from, or what command to use). If you are willing to advise me, that would be most helpful. Many thanks in advance, Tal

    Read the article

  • Get correct output from UTF-8 stored in VarChar using Entity Framework or Linq2SQL?

    - by jasonpenny
    Borland StarTeam seems to store its data as UTF-8 encoded data in VarChar fields. I have an ASP.NET MVC site that returns some custom HTML reports using the StarTeam database, and I would like to find a better solution for getting the correct data, for a rewrite with MVC2. I tried a few things with Encoding GetBytes and GetString, but I couldn't get it to work (we use mostly Delphi at work); then I figured out a T-SQL function to return a NVarChar from UTF-8 stored in a VarChar, and created new SQL views which return the data as NVarChar, but it's slow. The actual problem appears like this: “description†instead of “description”, in both SSMS and in a webpage when using Linq2SQL Is there a way to get the proper data out of these fields using Entity Framework or Linq2SQL?
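
    One technique that is sometimes used for this "UTF-8 bytes decoded as Windows-1252" symptom is to round-trip the string through the wrong encoding in application code, instead of converting in T-SQL. A sketch only: it assumes the driver decoded the varchar using code page 1252, and a few byte values do not round-trip cleanly through 1252, so it should be verified against real data:

        using System.Text;

        static string FixUtf8StoredInVarChar(string raw)
        {
            // Re-encode with the code page the driver used, recovering the original
            // UTF-8 bytes, then decode those bytes as UTF-8.
            byte[] originalBytes = Encoding.GetEncoding(1252).GetBytes(raw);
            return Encoding.UTF8.GetString(originalBytes);
        }

        // e.g. var description = FixUtf8StoredInVarChar(row.Description);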

    Read the article

  • IRb: how to start an interactive ruby session with pre-loaded classes

    - by Shyam
    Hi, as I journey through adopting the Ruby language, I spend a lot of time inside IRb. It's just fantastic! But, as I am not very aware of its capabilities, and still a nuby with Ruby, I would like to know the following: How can I 'flush' the session without restarting IRb (or is this not possible)? How can I configure IRb to load a bunch of source files, e.g. "hello.rb" and "hello_objects.rb", at startup? I am working heavily in these and it would be nice to have a shorthand to load these classes without manually typing 'load' for each one again. Thank you for your answers, comments and feedback!
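
    A couple of hedged pointers, assuming plain IRb started from a shell (file names as in the question): files can be required on the command line, or from an ~/.irbrc so every session picks them up, and a changed file can be re-read in a running session with load.

        # Load the files when starting irb:
        irb -r ./hello -r ./hello_objects

        # Or put the requires in ~/.irbrc so they load for every session:
        #   require './hello'
        #   require './hello_objects'

        # Inside a running session, re-read a file after editing it:
        load 'hello.rb'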

    Read the article

  • loading and formating the rtf file in uiwebview

    - by Jayshree
    Hello everybody. I am trying to load the contents of an RTF file in a UIWebView. I can load the contents, but they are unformatted: the fonts are too large and none of the formatting is applied. It displays the color of the RTF content, but not the font or size. Is there any other way to load the RTF and keep its formatting? I load the RTF in the following way: NSString *path = [[NSBundle mainBundle] pathForResource:@"Home" ofType:@"rtf"]; NSURL *url=[NSURL fileURLWithPath:path]; NSURLRequest *req=[NSURLRequest requestWithURL:url]; [menuWeb loadRequest:req]; So what should I do now? Can anybody help me?
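
    One thing worth trying (hedged: RTF rendering fidelity inside UIWebView is limited either way) is handing the bytes to the web view with an explicit RTF MIME type via loadData:MIMEType:textEncodingName:baseURL:, instead of letting it guess from the file URL:

        // menuWeb and the "Home.rtf" resource are the ones from the question.
        NSString *path = [[NSBundle mainBundle] pathForResource:@"Home" ofType:@"rtf"];
        NSData *rtfData = [NSData dataWithContentsOfFile:path];
        [menuWeb loadData:rtfData
                 MIMEType:@"text/rtf"
         textEncodingName:@"utf-8"
                  baseURL:[NSURL fileURLWithPath:path]];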

    Read the article

  • Machine Learning Algorithm for Peer-to-Peer Nodes

    - by FreshCode
    I want to apply machine learning to a classification problem in a parallel environment. Several independent nodes, each with multiple on/off sensors, can communicate their sensor data with the goal of classifying an event as defined by a heuristic, training data or both. Each peer will be measuring the same data from their unique perspective and will attempt to classify the result while taking into account that any neighbouring node (or its sensors or just the connection to the node) could be faulty. Nodes should function as equal peers and determine the most likely classification by communicating their results. Ultimately each node should make a decision based on their own sensor data and their peers' data. If it matters, false positives are OK for certain classifications (albeit undesirable) but false negatives would be totally unacceptable. Given that each final classification will receive good or bad feedback, what would be an appropriate machine learning algorithm to approach this problem with if the nodes could communicate with each other to determine the most likely classification?

    Read the article

  • pyODBC and Unicode Problem

    - by Aviv Giladi
    Hey guys, I'm working with pyODBC to communicate with an MS SQL 2005 Express server. The table to which I'm trying to save the data consists of nvarchar columns. query = u"INSERT INTO tblPersons (name, birthday, gender) VALUES('" query = query + name + u"', '" query = query + birthday + u"', '" query = query + gender + u"')" cur.execute(query) The variables name, birthday and gender are read from an Excel file and they are Unicode strings. When I execute the query and either look at the table with SQL Server Management Studio or run a query that fetches the data that was just inserted, all the data that was written in non-English languages turns into question marks. The data that was written in English is preserved and appears in the table correctly. I tried adding CHARSET=UTF16 to my connection string, but had no luck with that. I can use UTF-8, which works fine, but as a working convention I need all the data saved in my DB to be UTF-16. Thanks!
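
    A sketch of the usual fix: let pyodbc bind the values as parameters instead of splicing them into the SQL string. Bound Unicode strings are sent as nvarchar values, so nothing is forced through the connection's default code page, and the quoting/injection problems of concatenation go away. The connection string below is illustrative; name, birthday and gender are the Unicode values read from Excel:

        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=.\\SQLEXPRESS;"
                              "DATABASE=MyDb;Trusted_Connection=yes")
        cur = conn.cursor()

        # Parameters are bound as Unicode, not re-encoded through the query text.
        cur.execute(
            "INSERT INTO tblPersons (name, birthday, gender) VALUES (?, ?, ?)",
            name, birthday, gender)
        conn.commit()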

    Read the article

  • NSData to NSString by changing the value null is returned. I need you help

    - by kevin
    Full code for Cipher.h / Cipher.m: http://watchitlater.com/blog/2010/02/java-and-iphone-aes-interoperability Cipher.m: -(NSData *)encrypt:(NSData *)plainText{ return [self transform:KCCEncrypt data:plainText]; } Step 1: Cipher *cipher = [[Cipher alloc]initWithKey:@"1234567890"]; NSData *input = [@"kevin" dataUsingEncoding:NSUTF8StringEncoding]; NSData *data = [cipher encrypt:input]; NSLog of the data variable prints: <4d1c4d7f 1592718c fd588cec 84053e35 Step 2: NSString *changeVal = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding]; NSLog of changeVal prints: null. When I convert the NSData to an NSString this way, null is returned. I want the NSString so that I can send the value with NSURLConnection. I need your help.
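
    For context (hedged): AES output is arbitrary binary, so it is almost never valid UTF-8, and -initWithData:encoding: returns nil by design. The usual approach is to encode the bytes (e.g. Base64) before putting them into a string or URL request:

        // base64EncodedStringWithOptions: exists on NSData from iOS 7 on;
        // older SDKs need a Base64 category such as the one in the linked article.
        NSString *changeVal = [data base64EncodedStringWithOptions:0];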

    Read the article

  • Answers to Your Common Oracle Database Lifecycle Management Questions

    - by Scott McNeil
    We recently ran a live webcast on Strategies for Managing Oracle Database's Lifecycle. There were tons of questions from our audience that we simply could not get to during the hour-long presentation. Below are some of those questions along with their answers. Enjoy!

    Question: In the webcast the presenter talked about "gold" configuration standards; for those who want to use this technique, could you recommend a best practice to consider or follow? How do I get started?
    Answer: Gold configuration standardization is a quick and easy way to improve availability through consistency. Start by choosing a reference database and saving the configuration to the Oracle Enterprise Manager repository using the Save Configuration feature. Next, create a comparison template using the Oracle-provided template as a starting point and modify the ignored properties to eliminate expected differences in your environment. Finally, create a comparison specification using the comparison template you created plus your saved gold configuration, and schedule it to run on a regular basis. Don't forget to fill in the email addresses of those you want to notify upon drift detection. Watch the database configuration management demo to learn more.

    Question: Can Oracle Lifecycle Management Pack for Database help with patching an Oracle Real Application Clusters (RAC) environment?
    Answer: Yes, Oracle Enterprise Manager supports both parallel and rolling patch application for Oracle Real Application Clusters. The use of rolling patching is recommended as there is no downtime involved. For more details watch this demo.

    Question: What are some of the things administrators can do to control configuration drift? Why is it important?
    Answer: Configuration drift is one of the main causes of instability and downtime of applications. Oracle Enterprise Manager makes it easy to manage and control drift using scheduled configuration comparisons combined with comparison templates.

    Question: Does Oracle Enterprise Manager 12c Release 2 offer an incremental update feature for "gold" images? For instance, if the source binary has a higher PSU level, what is the best approach to update the existing "gold" image in the software library? Do you have to create a new image or can you just update the original one?
    Answer: Provisioning Profiles (gold images) can contain the installation files and database configuration templates. Although it is possible to make some changes to the profile after creation (mainly to configuration), it is normally recommended to simply create a new profile after applying a patch to your reference database.

    Question: The webcast talked about enforcing in-house standards; does Oracle Enterprise Manager 12c offer verification of your databases and systems against those standards? For example, the initial "gold" image has been massively deployed over time, and there may be some changes to it. How can you do regular checks from Enterprise Manager to ensure the in-house standards are being enforced?
    Answer: There are really two methods to validate conformity to standards. The first method is to use gold standards against which you compare other databases to report unwanted differences. This method uses a new comparison template technology which allows users to ignore known differences (i.e. SID, start time, etc.), which results in a report showing only important or non-conformant differences. This method is quick to set up and configure and is recommended for those who want to get started validating compliance quickly. The second method leverages the new compliance framework, which allows the creation of specific and robust validations. These compliance rules are grouped into standards which can be assigned to databases quickly and easily. Compliance rules allow for targeted and more sophisticated validation beyond the basic equals operation available in the comparison method. The compliance framework can be used to implement just about any internal or industry standard. The compliance results will track current and historic compliance scores at the overall level and for individual database targets. When an issue is resolved, the score is automatically affected. The compliance framework is the recommended long-term solution for validating compliance using Oracle Enterprise Manager 12c. Check out this demo on database compliance to learn more.

    Question: If you are using the integration between Oracle Enterprise Manager and My Oracle Support in an "offline" mode, how do you know if you have the latest My Oracle Support metadata?
    Answer: In Oracle Enterprise Manager 12c Release 2, you now only need to download one zip file containing all of the metadata XML files. There is no indication that the metadata has changed, but you could run a checksum on the file and compare it to the previously downloaded version to see if it has changed.

    Question: What happens if a patch fails while administrators are applying it to a database or system?
    Answer: A large portion of Oracle Enterprise Manager's patch automation is the prerequisite checks that happen to ensure the highest level of confidence that the patch will successfully apply. It is recommended you test the patch in a non-production environment and save the patch plan as a template once successful, so you can create new plans using the saved template. If you are using the recommended 'out of place' patching methodology, there is no urgency because the database is still running while the cloned Oracle home is being patched. Users can address the issue and restart the patch procedure at the point it left off. If you are using the 'in place' method, you can address the issue and continue where the procedure left off.

    Question: Can Oracle Enterprise Manager 12c R2 compare configurations between more than one target at the same time?
    Answer: Oracle Enterprise Manager 12c can compare any number of target configurations at one time. This is the basis of many important use cases, including Configuration Drift Management. These comparisons can also be scheduled on a regular basis and email notifications sent should any differences appear. To learn more about configuration search and compare, watch this demo.

    Question: How is data comparison done since changes are taking place in a live production system?
    Answer: There are many things to keep in mind when using the data comparison feature (part of the Change Management ability to compare table data). It was primarily intended to be used for maintaining consistency of important but relatively static data - for example, application seed data and application setup configuration. This data does not change often but is critical when testing an application to ensure results are consistent with production. It is not recommended to use data comparison on highly dynamic data like transactional tables or very large tables.

    Question: Which versions of Oracle Database can be monitored through Oracle Enterprise Manager 12c?
    Answer: Oracle Database versions 9.2.0.8, 10.1.0.5, 10.2.0.4, 10.2.0.5, 11.1.0.7, 11.2.0.1, 11.2.0.2, and 11.2.0.3.
    Watch the On-Demand Webcast. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter. Download the Oracle Enterprise Manager Cloud Control 12c Mobile app.

    Read the article

  • Flex TileList control, image loading issue

    - by ckenan
    I have a Flex 3 TileList in which I load several images (employees' headshot pictures). The images I'm loading into the TileList are stored in a database (I use the ByteArray class and Base64 encoding to store the images in the DB). When I load the images into the TileList from the DB, there is no problem and they are displayed correctly, but when I scroll down in the TileList and scroll up again, the positions of the images change, so for example the image in the first position can now be in the 3rd, and so on. Does somebody know how to fix that? Thanks in advance! PS: Here is the code of the ItemRenderer for the TileList private function init():void { img.load(data.imageData); } ]]
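
    Item renderers in a TileList are recycled as the list scrolls, so an image loaded once in init() stays attached to whatever row the renderer is later reused for. The usual pattern (a sketch; img and imageData as in the question) is to reload whenever the renderer is handed a new item, for example by overriding the data setter in the renderer:

        // Flex 3 item renderer: reload the image every time this recycled
        // renderer receives a new data item.
        override public function set data(value:Object):void
        {
            super.data = value;
            if (value != null)
            {
                img.load(value.imageData);
            }
        }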

    Read the article

  • Using Ruby Hash instead of Rails ActiveRecord in Coffeescript / Morris.JS

    - by Vanessa L'olzorz
    I'm following the Railscast #223 that introduced Morris.JS. I generate a data set called @orders_yearly in my controller and in my view I have the following to try and render the graph: <%= content_tag :div, "", id: "orders_chart", data: {orders: @orders_yearly} %> Calling @orders_yearly.inspect shows it's just a simple hash: {2009=>1000, 2010=>2000, 2011=>4000, 2012=>100000} I'll need to modify the values for xkey and ykeys in coffeescript to work, but I'm not sure how to make it work with my data set: jQuery -> Morris.Line element: 'orders_chart' data: $('#orders_chart').data('orders') xkey: 'purchased_at' # <------------------ replace with what? ykeys: ['price'] # <---------------------- replace with what? labels: ['Price'] Anyone have any ideas? Thanks!
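
    Morris expects data as an array of objects whose keys match xkey/ykeys, so one hedged approach is to reshape the hash in the controller and point xkey/ykeys at the new key names (year and price below are made-up names):

        # In the controller: turn {2009=>1000, ...} into [{year: "2009", price: 1000}, ...]
        @orders_yearly = @orders_yearly.map { |year, total| { year: year.to_s, price: total } }

    Then use xkey: 'year' and ykeys: ['price'] in the CoffeeScript; since the x values are plain year labels rather than timestamps, Morris's parseTime: false option is probably also wanted.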

    Read the article

  • How to compile scheme into native binary files ?

    - by Joe
    I am very new to Scheme. I am trying to compile some Scheme code into a binary file which will be loaded faster by the interpreter. (The interpreter is a hybrid interpreter.) Someone told me that I can compile the code into a native binary file and then load it into the interpreter. My questions are: 1. What is a native binary file? 2. How can I compile the Scheme code into a native binary file? 3. How can I load a native binary file into the Scheme interpreter? Thanks in advance. Joe. Suppose that I want to compile the code below into a native binary file: (define test (lambda () (display "this is a test"))) And then load the binary file into the interpreter and call the function "test".

    Read the article

  • NServiceBus pipeline with Distributors

    - by David
    I'm building a processing pipeline with NServiceBus but I'm having trouble with the configuration of the distributors in order to make each step in the process scalable. Here's some info:

    - The pipeline will have a master process that says "OK, time to start" for a WorkItem, which will then start a process like a flowchart.
    - Each step in the flowchart may be computationally expensive, so I want the ability to scale out each step. This tells me that each step needs a Distributor.
    - I want to be able to hook additional activities onto events later. This tells me I need to Publish() messages when it is done, not Send() them.
    - A process may need to branch based on a condition. This tells me that a process must be able to publish more than one type of message.
    - A process may need to join forks. I imagine I should use Sagas for this.

    Hopefully these assumptions are good, otherwise I'm in more trouble than I thought. For the sake of simplicity, let's forget about forking or joining and consider a simple pipeline, with Step A followed by Step B, and ending with Step C. Each step gets its own distributor and can have many nodes processing messages.

    - NodeA workers contain an IHandleMessages processor, and publish EventA
    - NodeB workers contain an IHandleMessages processor, and publish EventB
    - NodeC workers contain an IHandleMessages processor, and then the pipeline is complete.

    Here are the relevant parts of the config files, where # denotes the number of the worker (i.e. there are input queues NodeA.1 and NodeA.2):

    NodeA:
    <MsmqTransportConfig InputQueue="NodeA.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
    <UnicastBusConfig DistributorControlAddress="NodeA.Distrib.Control" DistributorDataAddress="NodeA.Distrib.Data" >
      <MessageEndpointMappings>
      </MessageEndpointMappings>
    </UnicastBusConfig>

    NodeB:
    <MsmqTransportConfig InputQueue="NodeB.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
    <UnicastBusConfig DistributorControlAddress="NodeB.Distrib.Control" DistributorDataAddress="NodeB.Distrib.Data" >
      <MessageEndpointMappings>
        <add Messages="Messages.EventA, Messages" Endpoint="NodeA.Distrib.Data" />
      </MessageEndpointMappings>
    </UnicastBusConfig>

    NodeC:
    <MsmqTransportConfig InputQueue="NodeC.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
    <UnicastBusConfig DistributorControlAddress="NodeC.Distrib.Control" DistributorDataAddress="NodeC.Distrib.Data" >
      <MessageEndpointMappings>
        <add Messages="Messages.EventB, Messages" Endpoint="NodeB.Distrib.Data" />
      </MessageEndpointMappings>
    </UnicastBusConfig>

    And here are the relevant parts of the distributor configs:

    Distributor A:
    <add key="DataInputQueue" value="NodeA.Distrib.Data"/>
    <add key="ControlInputQueue" value="NodeA.Distrib.Control"/>
    <add key="StorageQueue" value="NodeA.Distrib.Storage"/>

    Distributor B:
    <add key="DataInputQueue" value="NodeB.Distrib.Data"/>
    <add key="ControlInputQueue" value="NodeB.Distrib.Control"/>
    <add key="StorageQueue" value="NodeB.Distrib.Storage"/>

    Distributor C:
    <add key="DataInputQueue" value="NodeC.Distrib.Data"/>
    <add key="ControlInputQueue" value="NodeC.Distrib.Control"/>
    <add key="StorageQueue" value="NodeC.Distrib.Storage"/>

    I'm testing using 2 instances of each node, and the problem seems to come up in the middle at Node B. There are basically 2 things that might happen:

    1. Both instances of Node B report that it is subscribing to EventA, and also that NodeC.Distrib.Data@MYCOMPUTER is subscribing to the EventB that Node B publishes. In this case, everything works great.
    2. Both instances of Node B report that it is subscribing to EventA; however, one worker says NodeC.Distrib.Data@MYCOMPUTER is subscribing TWICE, while the other worker does not mention it.

    In the second case, which seems to be controlled only by the way the distributor routes the subscription messages, if the "overachiever" node processes an EventA, all is well. If the "underachiever" processes EventA, then the publish of EventB has no subscribers and the workflow dies.

    So, my questions:

    - Is this kind of setup possible? Is the configuration correct? It's hard to find any examples of configuration with distributors beyond a simple one-level publisher/2-worker setup.
    - Would it make more sense to have one central broker process that does all the non-computationally-intensive traffic-cop operations, and only sends messages to processes behind distributors when the task is long-running and must be load balanced? Then the load-balanced nodes could simply reply back to the central broker, which seems easier. On the other hand, that seems at odds with the decentralization that is NServiceBus's strength.
    - And if this is the answer, and the long-running process's done event is a reply, how do you keep the Publish that enables later extensibility on published events?

    Read the article

  • Highstock set different colors for different markers

    - by user1651804
    I am trying to develop a chart in Highstock where each marker gets an individual color. When I push my data into an array like this: series.data.push([ parseInt(Math.round(myDate.getTime())), parseInt(Math.round(myDate.getHours())) ]); the data is shown correctly, but when I try to set the color within each data object like this, the points are no longer shown. series.data.push([ { x: parseInt(Math.round(myDate.getTime())), y: parseInt(Math.round(myDate.getHours())), marker:{fillColor: 'red'}} ]); What am I missing? Update: When I inspect it with Firebug I can see that the series is set correctly within the chart object, so why does it not show up? :/
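
    The second snippet pushes an array that wraps the object, so each "point" becomes a one-element array holding an object Highstock cannot interpret. A hedged fix is to push the point object itself:

        // Push a point object directly (not wrapped in [ ]), with a per-point marker color.
        series.data.push({
            x: Math.round(myDate.getTime()),   // timestamp in milliseconds
            y: myDate.getHours(),
            marker: { fillColor: 'red' }       // individual color for this point
        });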

    Read the article

  • How to make Excel strip ALL quotes from CSV text fields

    - by Klay
    When importing a CSV file into Excel, it only strips the double-quotes from the FIRST field on the line, but leaves them on all other fields. How can I force Excel to strip the quotes from ALL strings? For instance, I have a CSV file:
    "text1", "text2", "numeric1", "numeric 2"
    "abc", "def", 123, 456
    "abc", "def", 123, 456
    "abc", "def", 123, 456
    "abc", "def", 123, 456
    I import it into Excel using Data > Import External Data > Import Data. I specify that the fields are delimited by commas, and that the text delimiter is the double-quote character. Both the data preview and the actual Excel spreadsheet columns only strip the double-quotes from the first text field. All other text fields still have quotes around them. What's really strange is that Access is able to import this data correctly (i.e. it strips quotes from every text field). Note that this is NOT a matter of internal commas or quotes or escape characters. This happens in both Excel 2003 and Excel 2007.

    Read the article

  • Oracle Tutor: Top 10 to Implement Sustainable Policies and Procedures

    - by emily.chorba(at)oracle.com
    Overview

    Your organization (executives, managers, and employees) understands the value of having written business process documents (process maps, procedures, instructions, reference documents, and form abstracts). Policies and procedures should be documented because they help to reduce the range of individual decisions and encourage management by exception: the manager only needs to give special attention to unusual problems not covered by a specific policy or procedure. As more and more procedures are written to cover recurring situations, managers will begin to make decisions which will be consistent from one functional area to the next.

    Companies should take a project management approach when implementing an environment for a sustainable documentation program and do the following:
    1. Identify an Executive Champion
    2. Put together a winning team
    3. Assign ownership
    4. Centralize publishing
    5. Establish the Document Maintenance Process up front
    6. Document critical activities only
    7. Document actual practice
    8. Minimize documentation
    9. Support continuous improvement
    10. Keep it simple

    1. Identify an Executive Champion

    Appoint a top-down driver. Select one key individual to be a mentor for the procedure planning team. The individual should be a senior manager, such as your company president, CIO, CFO, or the vice-president of quality, manufacturing, or engineering. Written policies and procedures can be important supportive aids when known to express the thinking of the chief executive officer and/or the president and to have his or her full support.

    2. Put Together a Winning Team

    Choose a strong Project Management Leader and staff the procedure planning team with management members from cross-functional groups. Make sure team members have the responsibility - and the authority - to make things happen. The winning team should consist of the Documentation Project Manager, Document Owners (one for each functional area), a Document Controller, and Document Specialists (as needed). The Tutor Implementation Guide has complete job descriptions for these roles.

    3. Assign Ownership

    It is virtually impossible to keep process documentation simple and meaningful if employees who are far removed from the activity itself create it. It is impossible to keep documentation up-to-date when responsibility for the document is not clearly understood. Key to the Tutor methodology, therefore, is the concept of ownership. Each document has a single owner, who is responsible for ensuring that the document is necessary and that it reflects actual practice. The owner must be a person who is knowledgeable about the activity and who has the authority to build consensus among the persons who participate in the activity, as well as the authority to define or change the way an activity is performed. The owner must be an advocate of the performers and negotiate, not dictate, practices. In the Tutor environment, a document's owner is the only person with the authority to approve an update to that document.

    4. Centralize Publishing

    Although it is tempting (especially in a networked environment and with document management software solutions) to decentralize the control of all documents -- with each owner updating and distributing his own -- Tutor promotes centralized publishing by assigning the Document Administrator (gatekeeper) to manage the updates and distribution of the procedures library.

    5. Establish a Document Maintenance Process Up Front (and stick to it)

    Everyone in your organization should know they are invited to suggest changes to procedures and should understand exactly what steps to take to do so. Tutor provides a set of procedures to help your company set up a healthy document control system. There are many document management products available to automate some of the document change and maintenance steps. Depending on the size of your organization, a simple document management system can reduce the effort it takes to track and distribute document changes and updates. Whether your company decides to store the written policies and procedures on a file server or in a database, the essential tasks for maintaining documents are the same, though some tasks are automated.

    6. Document Critical Activities Only

    The best way to keep your documentation simple is to reduce the number of process documents to a bare minimum and to include in those documents only as much detail as is absolutely necessary. The first step to reducing process documentation is to document only those activities that are deemed critical. Not all activities require documentation. In fact, some critical activities cannot and should not be standardized. Others may be sufficiently documented with an instruction or a checklist and may not require a procedure. A document should only be created when it enhances the performance of the employee performing the activity. If it does not help the employee, then there is no reason to maintain the document. Activities that represent little risk (such as project status), activities that cannot be defined in terms of specific tasks (such as product research), and activities that can be performed in a variety of ways (such as advertising) often do not require documentation. Sometimes, an activity will evolve to the point where documentation is necessary. For example, an activity performed by a single employee may be straightforward and uncomplicated -- that is, until the activity is performed by multiple employees. Sometimes, it is the interaction between co-workers that necessitates documentation; sometimes, it is the complexity or the diversity of the activity.

    7. Document Actual Practices

    The only reason to maintain process documentation is to enhance the performance of the employee performing the activity. And documentation can only enhance performance if it reflects reality -- that is, current best practice. Documentation that reflects an unattainable ideal or outdated practices will end up on the shelf, unused and forgotten. Documenting actual practice means (1) auditing the activity to understand how the work is really performed, (2) identifying best practices with employees who are involved in the activity, (3) building consensus so that everyone agrees on a common method, and (4) recording that consensus.

    8. Minimize Documentation

    One way to keep it simple is to document at the highest level possible. That is, include in your documents only as much detail as is absolutely necessary. When writing a document, you should ask yourself, what is the purpose of this document? That is, what problem will it solve? By focusing on this question, you can target the critical information.
    • What questions are the end users likely to have?
    • What level of detail is required?
    • Is any of this information extraneous to the document's purpose?
    Short, concise documents are user friendly and they are easier to keep up to date.

    9. Support Continuous Improvement

    Employees who perform an activity are often in the best position to identify improvements to the process. In other words, continuous improvement is a natural byproduct of the work itself -- but only if the improvements are communicated to all employees who are involved in the process, and only if there is consensus among those employees. Traditionally, process documentation has been used to dictate performance, to limit employees' actions. In the Tutor environment, process documents are used to communicate improvements identified by employees. How does this work? The Tutor methodology requires a process document to reflect actual practice, so the owner of a document must routinely audit its content -- does the document match what the employees are doing? If it doesn't, the owner has the responsibility to evaluate the process, to build consensus among the employees, to identify "best practices," and to communicate these improvements via a document update. Continuous improvement can also be an outgrowth of corrective action -- but only if the solutions to problems are communicated effectively. The goal should be to solve a problem once and only once, which means not only identifying the solution, but ensuring that the solution becomes part of the process. The Tutor system provides the method through which improvements and solutions are documented and communicated to all affected employees in a cost-effective, timely manner; it ensures that improvements are not lost or confined to a single employee.

    10. Keep it Simple

    Process documents don't have to be complex and unfriendly. In fact, the simpler the format and organization, the more likely the documents will be used. And the simpler the method of maintenance, the more likely the documents will be kept up-to-date. Keep it simple by:
    • Minimizing skills and training required
    • Following the established Tutor document format and layout
    • Avoiding technology just for technology's sake
    No other rule has as major an impact on the success of your internal documentation as -- keep it simple.

    Learn More

    For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

    Emily Chorba
    Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • write() causes fatal crash when filedescriptor becomes invalid

    - by ckrames1234
    I'm writing an iPhone app with a webserver in it. To handle a web request, I take the request's file descriptor and write() to it the data that I want to send back. When I download a moderately sized file (3-6MB) it works fine, but if I cancel the download halfway through, the app crashes and leaves no trace of an error. I'm thinking that the file descriptor becomes invalid halfway through the write and causes the crash. I really don't know if this is what causes the crash; I'm just assuming. I'm basing my webserver off of this example. NSString *header = @""; NSData *data = [NSData dataWithContentsOfFile:fullPath]; write(fd, [header UTF8String], [header length]); write(fd, [data bytes], [data length]); close(fd); Does anyone know how to fix this? I was thinking about chunking the data and then writing each part, but I don't think it would help.
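
    A likely explanation (hedged): when the client cancels, the next write() to the closed socket raises SIGPIPE, which kills the process without a useful trace, matching the symptom here. The usual mitigations are to suppress the signal and to check write()'s return value; a sketch of both, to be placed where the server starts up and where the response is written:

        #include <signal.h>
        #include <sys/socket.h>
        #include <unistd.h>

        // Once at startup: a write to a closed socket then returns -1/EPIPE
        // instead of killing the app.
        signal(SIGPIPE, SIG_IGN);

        // Or per socket, when the connection is accepted:
        int on = 1;
        setsockopt(fd, SOL_SOCKET, SO_NOSIGPIPE, &on, sizeof(on));

        // And always check the result; stop sending once the peer has gone away.
        ssize_t written = write(fd, [data bytes], [data length]);
        if (written < 0) {
            close(fd);   // EPIPE / ECONNRESET: the client cancelled the download
            return;
        }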

    Read the article

  • Jquery AJAX and figuring out how NOT to nest callbacks for multiple reloads.

    - by David H
    I have a site with a few containers that need to load content dynamically via AJAX. Most of the links inside the containers actually load new data into that same container, e.g.: $("#navmenu_item1").click(function() { $("#navmenu").load(blah); }); When I click on a link, let's say "logout," and the menu reloads with the updated login status, none of the links work any more, because I would need to add a new .click handler inside the callback code. With repeated clicks and AJAX requests I am not sure how to move forward from here, as I am sure nested callbacks are not optimal. What would be the best way to do this? Do I include the .click event inside the actual AJAX file being pulled from the server, instead of within index.php? Or can I somehow reload the DOM? Thanks for the help!
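
    The usual way out of re-binding handlers after every load is event delegation: attach one handler to a container that survives the reloads and let it match current and future links. A sketch with jQuery's .on() (older jQuery versions used .live()/.delegate()); the anchor selector and URL handling are illustrative:

        // One delegated handler on the container keeps working no matter how many
        // times #navmenu's contents are replaced by .load().
        $('#navmenu').on('click', 'a', function (e) {
            e.preventDefault();
            $('#navmenu').load($(this).attr('href'));
        });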

    Read the article

  • Machine Learning Algorithm for Parallel Nodes

    - by FreshCode
    I want to apply machine learning to a classification problem in a parallel environment. Several independent nodes, each with multiple on/off sensors, can communicate their sensor data with the goal of classifying an event defined by a heuristic, training data or both. Each peer will be measuring the same data from their unique perspective and will attempt to classify the result while taking into account that any neighbouring node (or its sensors or just the connection to the node) could be faulty. Nodes should function as equal peers and determine the most likely classification by communicating their results. Ultimately each node should make a decision based on their own sensor data and their peers' data. If it matters, false positives are OK (albeit undesirable) but false negatives are totally unacceptable. Given that each final classification will receive good or bad feedback, what would be an appropriate machine learning algorithm to approach this problem with if the nodes could communicate with each other to determine the most likely classification?

    Read the article
