Search Results

Search found 25579 results on 1024 pages for 'complex event processing'.

Page 762/1024

  • Building static (but complicated) lookup table using templates.

    - by MarkD
    I am currently in the process of optimizing a numerical analysis code. Within the code, there is a 200x150 element lookup table (currently a static std::vector<std::vector<double>>) that is constructed at the beginning of every run. The construction of the lookup table is actually quite complex: the values in the table are computed using an iterative secant method on a complicated set of equations. Currently, for a simulation, the construction of the lookup table is 20% of the run time (run times are on the order of 25 seconds; lookup table construction takes 5 seconds). While 5 seconds might not seem like a lot, when running our MC simulations, where we are running 50k+ simulations, it suddenly becomes a big chunk of time. Along with some other ideas, one thing that has been floated is: can we construct this lookup table using templates at compile time? The table itself never changes. Hard-coding a large array isn't a maintainable solution (the equations that go into generating the table are constantly being tweaked), but it seems that if the table could be generated at compile time, it would give us the best of both worlds (easily maintainable, no overhead at runtime). So, I propose the following (much simplified) scenario. Let's say you wanted to generate a static array (use whatever container suits you best: 2D C array, vector of vectors, etc.) at compile time. You have a function defined, double f(int row, int col), where the return value is the entry in the table, row is the lookup table row, and col is the lookup table column. Is it possible to generate this static array at compile time using templates, and how?
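
    For illustration, a minimal sketch of the same idea using C++17 constexpr rather than classic template metaprogramming, assuming the generator can be rewritten as a constexpr function (f below is a hypothetical stand-in for the real secant-method code, and very large tables may bump into compiler constexpr step limits):

      #include <array>
      #include <cstddef>

      // Hypothetical stand-in for the real secant-method computation; the real
      // generator would itself need to be constexpr-compatible.
      constexpr double f(std::size_t row, std::size_t col)
      {
          return 0.5 * static_cast<double>(row) + 0.25 * static_cast<double>(col);
      }

      // Builds the whole table during compilation (C++17, where std::array's
      // operator[] is usable in constexpr evaluation).
      template <std::size_t Rows, std::size_t Cols>
      constexpr std::array<std::array<double, Cols>, Rows> make_table()
      {
          std::array<std::array<double, Cols>, Rows> table{};
          for (std::size_t r = 0; r < Rows; ++r)
              for (std::size_t c = 0; c < Cols; ++c)
                  table[r][c] = f(r, c);
          return table;
      }

      constexpr auto lookup = make_table<200, 150>();
      static_assert(lookup.size() == 200, "row count");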

    Read the article

  • Do database engines other than SQL Server behave this way?

    - by Yishai
    I have a stored procedure that goes something like this (pseudocode): storedprocedure param1, param2, param3, param4 begin if (param4 = 'Y') begin select * from SOME_VIEW order by somecolumn end else if (param1 is null) begin select * from SOME_VIEW where (param2 is null or param2 = SOME_VIEW.Somecolumn2) and (param3 is null or param3 = SOME_VIEW.SomeColumn3) order by somecolumn end else select somethingcompletelydifferent end All ran well for a long time. Suddenly, the query started running forever if param4 was 'Y'. Changing the code to this: storedprocedure param1, param2, param3, param4 begin if (param4 = 'Y') begin set param2 = null set param3 = null end if (param1 is null) begin select * from SOME_VIEW where (param2 is null or param2 = SOME_VIEW.Somecolumn2) and (param3 is null or param3 = SOME_VIEW.SomeColumn3) order by somecolumn end else select somethingcompletelydifferent made it run again within expected parameters (15 seconds or so for 40,000+ records). This is with SQL Server 2005. The gist of my question: is this particular "feature" specific to SQL Server, or is it common among RDBMSs in general that queries that ran fine for two years just stop working as the data grows, and that the "new" execution plan destroys the ability of the database server to execute the query even though a logically equivalent alternative runs just fine? This may seem like a rant against SQL Server, and I suppose to some degree it is, but I really do want to know whether others experience this kind of reality with Oracle, DB2 or any other RDBMS. Although I have some experience with others, I have only seen this kind of volume and complexity on SQL Server, so I'm curious whether others with large, complex databases have had similar experiences in other products.
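
    For illustration, a hedged sketch of one common mitigation, assuming the regression is parameter sniffing (the cached plan being shaped around one unrepresentative parameter set): copy the parameters into local variables and/or add OPTION (RECOMPILE). The procedure name and parameter types below are assumptions; the column names come from the pseudocode above.

      -- Sketch only, not the original procedure.
      CREATE PROCEDURE dbo.SearchSomeView
          @param1 INT, @param2 INT, @param3 INT, @param4 CHAR(1)
      AS
      BEGIN
          DECLARE @p2 INT, @p3 INT;
          SELECT @p2 = @param2, @p3 = @param3;

          IF (@param4 = 'Y')
              SELECT @p2 = NULL, @p3 = NULL;

          SELECT *
          FROM SOME_VIEW
          WHERE (@p2 IS NULL OR @p2 = SOME_VIEW.Somecolumn2)
            AND (@p3 IS NULL OR @p3 = SOME_VIEW.SomeColumn3)
          ORDER BY somecolumn
          OPTION (RECOMPILE);   -- optional: re-optimize this statement per execution
      END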

    Read the article

  • How do I serialise a graph in Java without getting StackOverflowException?

    - by Tim Cooper
    I have a graph structure in Java ("graph" as in "edges and nodes") and I'm attempting to serialise it. However, I get StackOverflowException, despite significantly increasing the JVM stack size. I did some googling, and apparently this is a well-known limitation of Java serialisation: it doesn't work for deeply nested object graphs such as long linked lists - it uses a stack record for each link in the chain, and it doesn't do anything clever such as a breadth-first traversal, so you very quickly get a stack overflow. The recommended solution is to customise the serialisation code by overriding readObject() and writeObject(), but this seems a little complex to me. (It may or may not be relevant, but I'm storing a bunch of fields on each edge in the graph, so I have a class JuNode which contains a member ArrayList<JuEdge> links;, i.e. there are 2 classes involved, rather than plain object references from one node to another. It shouldn't matter for the purposes of the question.) My question is threefold: (a) why don't the implementors of Java rectify this limitation, or are they already working on it? (I can't believe I'm the first person to ever want to serialise a graph in Java.) (b) Is there a better way? Is there some drop-in alternative to the default serialisation classes that does it in a cleverer way? (c) If my best option is to get my hands dirty with low-level code, does someone have an example of graph serialisation Java source code that I can use to learn how to do it?
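
    For illustration, a hedged sketch of one alternative to overriding readObject()/writeObject(): serialise a flattened, index-based view of the graph so the default machinery never recurses through node references. JuNode below is a hypothetical simplification of the JuNode/JuEdge classes mentioned above (edge payloads would get their own parallel list), and the read side rebuilds the graph from the same two lists.

      // Sketch only: a flat form with integer edge references serialises without
      // deep recursion; rebuilding the object graph on read is the mirror image.
      import java.io.Serializable;
      import java.util.ArrayList;
      import java.util.IdentityHashMap;
      import java.util.List;
      import java.util.Map;

      class JuNode {                                    // hypothetical simplification
          String payload;
          List<JuNode> links = new ArrayList<JuNode>();
      }

      class FlatGraph implements Serializable {
          private static final long serialVersionUID = 1L;
          List<String> payloads = new ArrayList<String>();
          List<int[]> edges = new ArrayList<int[]>();   // {fromIndex, toIndex}

          static FlatGraph flatten(List<JuNode> nodes) {
              FlatGraph flat = new FlatGraph();
              Map<JuNode, Integer> index = new IdentityHashMap<JuNode, Integer>();
              for (int i = 0; i < nodes.size(); i++) {
                  index.put(nodes.get(i), i);
                  flat.payloads.add(nodes.get(i).payload);
              }
              for (JuNode from : nodes) {
                  for (JuNode to : from.links) {
                      flat.edges.add(new int[] { index.get(from), index.get(to) });
                  }
              }
              return flat;
          }
      }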

    Read the article

  • How do I use .htaccess to redirect to a URL containing HTTP_HOST?

    - by Jon Cram
    Problem I need to redirect some short convenience URLs to longer actual URLs. The site in question uses a set of subdomains to identify a set of development or live versions. I would like the URL to which certain requests are redirected to include the HTTP_HOST such that I don't have to create a custom .htaccess file for each host. Host-specific Example (snipped from .htaccess file) Redirect /terms http://support.dev01.example.com/articles/terms/ This example works fine for the development version running at dev01.example.com. If I use the same line in the main .htaccess file for the development version running under dev02.example.com I'd end up being redirected to the wrong place. Ideal rule (not sure of the correct syntax) Redirect /terms http://support.{HTTP_HOST}/articles/terms/ This rule does not work and merely serves as an example of what I'd like to achieve. I could then use the exact same rule under many different hosts and get the correct result. Answers? Can this be done with mod_alias or does it require the more complex mod_rewrite? How can this be achieved using mod_alias or mod_rewrite? I'd prefer a mod_alias solution if possible. Clarifications I'm not staying on the same server. I'd like: http://example.com/terms/ - http://support.example.com/articles/terms/ https://secure.example.com/terms/ - http://support.example.com/articles/terms/ http://dev.example.com/terms/ - http://support.dev.example.com/articles/terms/ https://secure.dev.example.com/terms/ - http://support.dev.example.com/articles/terms/ I'd like to be able to use the same rule in the .htaccess file on both example.com and dev.example.com. In this situation I'd need to be able to refer to the HTTP_HOST as a variable rather than specifying it literally in the URL to which requests are redirected. I'll investigate the HTTP_HOST parameter as suggested but was hoping for a working example.
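
    For reference, a hedged, untested sketch of the mod_rewrite form (mod_alias Redirect cannot interpolate the host, but a RewriteCond capture can); the optional secure. prefix is stripped so both secure.example.com and example.com map to support.example.com:

      RewriteEngine On
      # %1 is the group captured by the RewriteCond below.
      RewriteCond %{HTTP_HOST} ^(?:secure\.)?(.+)$ [NC]
      RewriteRule ^terms/?$ http://support.%1/articles/terms/ [R=302,L]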

    Read the article

  • Should I thread this?

    - by Psytronic
    I've got a "Loading Progress" WPF form which I instantiate when I start loading some results into my Main Window, and every time a result is loaded I want the progress bar to fill up by x amount (Based on the amount of results I'm loading). However what happens is that the Progress bar in the window stays blank the entire time, until the results have finished loading, then it will just display the full progress bar. Does this need threading to work properly? Or is it just to do with the way I'm trying to get it to work? //code snippet LoadingProgress lp = new LoadingProgress(feedCount); lp.Show(); foreach (FeedConfigGroup feed in _Feeds) { feed.insertFeeds(lp); } //part of insertFeeds(LoadingProgress lbBox) foreach (Feeds fd in _FeedSource) { lpBox.setText(fd.getName); XmlDocument feedResults = new XmlDocument(); feedResults.PreserveWhitespace = false; try { feedResults.Load(wc.OpenRead(fd.getURL)); } catch (WebException) { lpBox.addError(fd.getName); } foreach (XmlNode item in feedResults.SelectNodes("/rss/channel/item")) { //code for processing the nodes... } lpBox.progressIncrease(); } If more code is needed let me know.

    Read the article

  • PHP script stops running arbitrarily with no errors

    - by RobHardgood
    I was working on a fairly long script that seemed to stop running after about 20 minutes. After days of trying to figure out why, I decided to make a very simple script to see how long it would run without any complex code to confuse me. I found that the same thing was happening with this simple infinite loop. At some point between 15 and 25 minutes of running, it simply stopped. There is no error output at all. The bottom of the browser simply says "Done" and nothing else is processed. I've been over every single possible thing I could think of... set_time_limit, session.gc_maxlifetime in the php.ini as well as memory_limit and max_execution_time. The point that the script is stopped is never consistent. Sometimes it will stop at 15 minutes, sometimes 22 minutes, sometimes 17... But it's always long enough to be a PITA to test. Please, any help would be greatly appreciated. It is hosted on a 1and1 server. I contacted them and they couldn't help me... but I have a feeling they just didn't know enough.

    Read the article

  • Confusion about using types instead of GTTs in Oracle

    - by Omnipresent
    I am trying to convert queries like the one below to types so that I won't have to use a GTT: insert into my_gtt_table_1 (house, lname, fname, MI, fullname, dob) (select house, lname, fname, MI, fullname, dob from (select 'REG' house, mbr_last_name lname, mbr_first_name fname, mbr_mi MI, mbr_first_name || mbr_mi || mbr_last_name fullname, mbr_dob dob from table_1 a, table_b where a.head = b.head and mbr_number = '01' and mbr_last_name = v_last_name) c The above is just a sample, but the real queries are bigger than this, and it runs inside a stored procedure. So, to avoid the GTT (my_gtt_table_1), I did the following: create or replace type lname_row as object ( house varchar2(30) lname varchar2(30), fname varchar2(30), MI char(1), fullname VARCHAR2(63), dob DATE ) create or replace type lname_exact as table of lname_row Now in the SP: type lname_exact is table of <what_table_should_i_put_here>%rowtype; tab_a_recs lname_exact; In the above I am not sure what table to put, as my query has nested subqueries. Query in the SP (I am trying this as a sample to see if it works): select lname_row('', '', '', '', '', '', sysdate) bulk collect into tab_a_recs from table_1; I am getting errors like ORA-00913: too many values. I am really confused and stuck with this :(

    Read the article

  • C++ Declarative Parsing Serialization

    - by Martin York
    Looking at Java and C#, they manage to do some wicked processing based on special language-based annotations (forgive me if that is the incorrect name). In C++ we have two problems with this: 1) There is no way to annotate a class with type information that is accessible at runtime. 2) Parsing the source to generate stuff is way too complex. But I was thinking that this could be done with some template meta-programming to achieve the same basic effect as annotations (still just thinking about it). Like char_traits, which are specialised for the different types, an xml_traits template could be used in a declarative way. This traits class could be used to define how a class is serialised/deserialised by specialising the traits for the class you are trying to serialise. Example thoughts: template<typename T> struct XML_traits { typedef XML_Empty Children; }; template<> struct XML_traits<Car> { typedef boost::mpl::vector<Body,Wheels,Engine> Children; }; template<typename T> std::ostream& Serialize(T const&) { // my template foo is not that strong. // but something like this. boost::mpl::for_each<typename XML_Traits<T>::Children,Serialize>(data); } template<> std::ostream& Serialize<XML_Empty>(T const&) { /* Do Nothing */ } My question is: has anybody seen any projects/documentation (not just XML) out there that use techniques like this (template meta-programming) to emulate the concept of annotations used in languages like Java and C#, which can then be used in code generation (to effectively automate the task in a declarative style)? At this point in my research I am looking for more reading material and examples.

    Read the article

  • Using the Queue class in Python 2.6

    - by voipme
    Let's assume I'm stuck using Python 2.6, and can't upgrade (even if that would help). I've written a program that uses the Queue class. My producer is a simple directory listing. My consumer threads pull a file from the queue, and do stuff with it. If the file has already been processed, I skip it. The processed list is generated before all of the threads are started, so it isn't empty. Here's some pseudo-code. import Queue, sys, threading processed = [] def consumer(): while True: file = dirlist.get(block=True) if file in processed: print "Ignoring %s" % file else: # do stuff here dirlist.task_done() dirlist = Queue.Queue() for f in os.listdir("/some/dir"): dirlist.put(f) max_threads = 8 for i in range(max_threads): thr = Thread(target=consumer) thr.start() dirlist.join() The strange behavior I'm getting is that if a thread encounters a file that's already been processed, the thread stalls out and waits until the entire program ends. I've done a little bit of testing, and the first 7 threads (assuming 8 is the max) stop, while the 8th thread keeps processing, one file at a time. But, by doing that, I'm losing the entire reason for threading the application. Am I doing something wrong, or is this the expected behavior of the Queue/threading classes in Python 2.6?
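
    For what it's worth, a hedged sketch of one likely culprit: if task_done() is only reached on the else branch (the flattened code above leaves the indentation ambiguous), the queue's unfinished-task count never drops for skipped files, so dirlist.join() can hang even though the workers look idle. Calling task_done() for every get(), e.g. in a finally block, avoids that:

      import os
      import Queue
      import threading

      processed = set()
      dirlist = Queue.Queue()

      def consumer():
          while True:
              path = dirlist.get()
              try:
                  if path in processed:
                      print "Ignoring %s" % path
                  else:
                      pass  # do the real work here
              finally:
                  dirlist.task_done()   # always called, even for skipped files

      for f in os.listdir("/some/dir"):
          dirlist.put(f)

      for _ in range(8):
          t = threading.Thread(target=consumer)
          t.daemon = True
          t.start()

      dirlist.join()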

    Read the article

  • Retrieve class name hierarchy as string

    - by Jeff Wain
    Our system complexity has risen to the point that we need to make permission names tied to the client from the database more specific. In the client, permissions are referenced from a static class since a lot of client functionality is dependent on the permissions each user has and the roles have a ton of variety. I've referenced this post as an example, but I'm looking for a more specific use case. Take for instance this reference, where PermissionAlpha would be a const string: return HasPermission(PermissionNames.PermissionAlpha); Which is great, except now that things are growing more complex the classes are being structured like this: public static class PermissionNames { public static class PermissionAlpha { public const string SubPermission; } } I'm trying to find an easy way to reference PermissionAlpha in this new setup that will act similar to the first declaration above. Would the only way to do this be to resort to pulling the value of the class name like in the example below? I'm trying to keep all the names in one place that can be reference anywhere in the application. public static class PermissionAlpha { public static string Name { get { return typeof(PermissionAlpha).Name; } } }
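
    For illustration, a hedged sketch of one way to get the nested-class hierarchy as a dotted string without giving every class its own Name property: walk Type.DeclaringType. SubPermission is shown here as a nested class (a hypothetical variation on the const string above) purely to demonstrate the multi-level case.

      using System;
      using System.Collections.Generic;

      public static class PermissionNames
      {
          public static class PermissionAlpha
          {
              public static class SubPermission { }   // hypothetical deeper level
          }
      }

      public static class PermissionName
      {
          // Builds "PermissionAlpha.SubPermission"-style names by walking the
          // nesting chain; the outermost container (PermissionNames) is skipped.
          public static string Of(Type t)
          {
              var parts = new Stack<string>();
              for (var cur = t; cur != null && cur.DeclaringType != null; cur = cur.DeclaringType)
                  parts.Push(cur.Name);
              return string.Join(".", parts.ToArray());
          }
      }

      // Usage: PermissionName.Of(typeof(PermissionNames.PermissionAlpha.SubPermission))
      //        returns "PermissionAlpha.SubPermission".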

    Read the article

  • How to extract comment out of header file using python, perl, or sed?

    - by WilliamKF
    I have a header file like this: /* * APP 180-2 ALG-254/258/772 implementation * Last update: 03/01/2006 * Issue date: 08/22/2004 * * Copyright (C) 2006 Somebody's Name here * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the project nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE PROJECT AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE PROJECT OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #ifndef HEADER_H #define HEADER_H /* More comments and C++ code here. */ #endif /* End of file. */ And I wish to extract out the contents of the first C style comment only and drop the " *" at the start of each line to get a file with the following contents: APP 180-2 ALG-254/258/772 implementation Last update: 03/01/2006 Issue date: 08/22/2004 Copyright (C) 2006 Somebody's Name here All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the project nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE PROJECT AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE PROJECT OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Please suggest an easy way to do this with Python, Perl, sed, or some other way on Unix. 
Preferably as a one-liner.
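
    For what it's worth, a hedged Python sketch (short rather than a strict one-liner) that prints the first /* ... */ block with the leading " * " decoration stripped; a sed or perl -0777 version would follow the same idea:

      import re
      import sys

      with open(sys.argv[1]) as fh:
          source = fh.read()

      match = re.search(r'/\*(.*?)\*/', source, re.DOTALL)   # first comment only
      if match:
          for line in match.group(1).splitlines():
              print(re.sub(r'^\s*\*\s?', '', line))           # drop the leading " * "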

    Read the article

  • Implementation of MVC with SQLite and NSURLConnection, use cases?

    - by user324723
    I'm interested in knowing how others have implemented/designed database & web services in their iPhone app and how they simplified it for the entire application. My application is dependent on these services and I can't figure out an efficient way to use them together due to the (semi)complexity of my requirements. My past attempts at combining them haven't been completely successful, or at least not optimal in my mind. I'm building a database-driven iPhone app that uses a relational database in SQLite and consumes web services based on missing content or user interaction. Like this hasn't been done before... right? Since I am using a relational database, any web service consumed requires normalization, parsing the result and persisting it to the database before it can be displayed in a table view controller. The application's UI consists of nested (nav controller) table views where a user can select a cell and be taken to the next table view, where it attempts to populate the table view's data source from the database. If nothing exists in the database then it will send a request via web services to download its content, thus download - parse - persist - query - display. Since the user has the ability to request a refresh of this data, it still requires the same process. Quickly describing what I've implemented and tried to run with: 1st attempt - Used a singleton web service class that handled sending web service requests, parsing the result and returning it to the table view controller via delegate protocols. Once the controller received that data it would then be responsible for persisting it to the database and re-returning the result. I didn't like the idea of only preventing the case where the app delegate selector doesn't exist (released), causing the app to crash. 2nd attempt - Used NSNotificationCenter for easy access to both database and web services, but later realized it was more complex due to adding and removing observers per view (which isn't advised anyway).

    Read the article

  • Pump Messages During Long Operations + C#

    - by Newbie
    Hi, I have a web service that is doing a huge computation and is taking more than a minute. I have generated the proxy file of the web service, and from my client end I am using the DLL (of course, I generated the proxy DLL). My client-side code is TimeSeries3D t = new TimeSeries3D(); int portfolioId = 4387919; string[] str = new string[2]; str[0] = "MKT_CAP"; DateRange dr = new DateRange(); dr.mStartDate = DateTime.Today; dr.mEndDate = DateTime.Today; Service1 sc = new Service1(); t = sc.GetAttributesForPortfolio(portfolioId, true, str, dr); But since it is taking too much time for the server to compute, after 1 minute I receive an error message: The CLR has been unable to transition from COM context 0x33caf30 to COM context 0x33cb0a0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations. Kindly guide me what to do? Thanks

    Read the article

  • Integration transport choice (Oracle + SQL Server)

    - by lak-b
    We have several systems with Oracle (A) and SQL Server (B) databases on the backend. I have to consolidate data from those systems into a new SQL Server database. Something like this: (A) =>|---------------| | some software | => SQL Server (B) =>|---------------| where some software is: transport (systems A and B are located in the network); processing business logic (custom .NET code). Due to the first point, I need some queue software or something similar (like MSMQ, Service Broker or something). On the other hand, I could implement a web service instead of a queue: (A) =>|---------------|-------------| | queue/service | custom code | => SQL Server (B) =>|---------------|-------------| The question is: which queue/transport framework should I use with Oracle and SQL Server databases? It would be nice if I could post messages to MSMQ in both Oracle and SQL Server stored procedures (can I?). It would be nice if I could call a web service in both Oracle and SQL Server stored procedures (can I?). It would be nice if I could use something similar in both Oracle and SQL Server stored procedures (what exactly?). What software should I prefer given my requirements?

    Read the article

  • Replace without the replace function

    - by Molly Potter
    Assignment: Let X and Y be two words. Find/Replace is a common word processing operation that finds each occurrence of word X and replaces it with word Y in a given document. Your task is to write a program that performs the Find/Replace operation. Your program will prompt the user for the word to be replaced (X), then the substitute word (Y ). Assume that the input document is named input.txt. You must write the result of this Find/Replace operation to a file named output.txt. Lastly, you cannot use the replace() string function built into Python (it would make the assignment much too easy). To test your code, you should modify input.txt using a text editor such as Notepad or IDLE to contain different lines of text. Again, the output of your code must look exactly like the sample output. This is my code: input_data = open('input.txt','r') #this opens the file to read it. output_data = open('output.txt','w') #this opens a file to write to. userStr= (raw_input('Enter the word to be replaced:')) #this prompts the user for a word userReplace =(raw_input('What should I replace all occurences of ' + userStr + ' with?')) #this prompts the user for the replacement word for line in input_data: words = line.split() if userStr in words: output_data.write(line + userReplace) else: output_data.write(line) print 'All occurences of '+userStr+' in input.txt have been replaced by '+userReplace+' in output.txt' #this tells the user that we have replaced the words they gave us input_data.close() #this closes the documents we opened before output_data.close() It won't replace anything in the output file. Help!
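
    For comparison, a hedged sketch of the replacement loop only, reusing userStr and userReplace from the prompts above: rebuild each line word by word instead of writing line + userReplace. Note that split()/join() collapses runs of whitespace, which may or may not matter for the assignment.

      input_data = open('input.txt', 'r')
      output_data = open('output.txt', 'w')
      for line in input_data:
          words = line.split()
          new_words = [userReplace if word == userStr else word for word in words]
          output_data.write(' '.join(new_words) + '\n')   # re-add the newline split() dropped
      input_data.close()
      output_data.close()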

    Read the article

  • How to handle 'this' pointer in constructor?

    - by Kyle
    I have objects which create other child objects within their constructors, passing 'this' so the child can save a pointer back to its parent. I use boost::shared_ptr extensively in my programming as a safer alternative to std::auto_ptr or raw pointers. So the child would have code such as shared_ptr<Parent>, and boost provides the shared_from_this() method which the parent can give to the child. My problem is that shared_from_this() cannot be used in a constructor, which isn't really a crime because 'this' should not be used in a constructor anyways unless you know what you're doing and don't mind the limitations. Google's C++ Style Guide states that constructors should merely set member variables to their initial values. Any complex initialization should go in an explicit Init() method. This solves the 'this-in-constructor' problem as well as a few others as well. What bothers me is that people using your code now must remember to call Init() every time they construct one of your objects. The only way I can think of to enforce this is by having an assertion that Init() has already been called at the top of every member function, but this is tedious to write and cumbersome to execute. Are there any idioms out there that solve this problem at any step along the way?
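
    A hedged sketch of one common idiom for this: a private constructor plus a static factory, so callers cannot obtain an object without Init() having run, and shared_from_this() is only called once a shared_ptr actually owns the instance.

      #include <boost/enable_shared_from_this.hpp>
      #include <boost/shared_ptr.hpp>

      class Parent : public boost::enable_shared_from_this<Parent> {
      public:
          static boost::shared_ptr<Parent> Create() {
              boost::shared_ptr<Parent> p(new Parent());   // ctor only sets members
              p->Init();                                   // safe: p owns the object now
              return p;
          }

      private:
          Parent() {}
          void Init() {
              boost::shared_ptr<Parent> self = shared_from_this();
              // construct children here and hand them 'self'
              (void)self;
          }
      };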

    Read the article

  • Flex profiling - what is [enterFrameEvent] doing?

    - by Herms
    I've been tasked with finding (and potentially fixing) some serious performance problems with a Flex application that was delivered to us. The application will consistently take up 50 to 100% of the CPU at times when it is simply idling and shouldn't be doing anything. My first step was to run the profiler that comes with FlexBuilder. I expected to find some method that was taking up most of the time, showing me where the bottleneck was. However, I got something unexpected. The top 4 methods were: [enterFrameEvent] - 84% cumulative, 32% self time [reap] - 20% cumulative and self time [tincan] - 8% cumulative and self time global.isNaN - 4% cumulative and self time All other methods had less than 1% for both cumulative and self time. From what I've found online, the [bracketed methods] are what the profiler lists when it doesn't have an actual Flex method to show. I saw someone claim that [tincan] is the processing of RTMP requests, and I assume [reap] is the garbage collector. Does anyone know what [enterFrameEvent] is actually doing? I assume it's essentially the "main" function for the event loop, so the high cumulative time is expected. But why is the self time so high? What's actually going on? I didn't expect the player internals to be taking up so much time, especially since nothing is actually happening in the app (and there are no UI updates going on). Is there any good way to dig into what's happening? I know something is going on that shouldn't be (it looks like there must be some kind of busy wait or other runaway loop), but the profiler isn't giving me any results that I was expecting. My next step is going to be to start adding debug trace statements in various places to try and track down what's actually happening, but I feel like there has to be a better way.

    Read the article

  • Modifying SQL Database on Shared Hosting

    - by apocalypse9
    I have a live database on a shared hosting server. I am making some major changes to my site's code and I would like to fix some stupid mistakes I made in initially designing the database. These changes involve altering the size of a large number of fields and enforcing referential integrity between tables properly. I would like to make the changes on both my local test server and the remote server if possible. I should note that while I'm fairly comfortable with writing complex queries to handle data, I have very little experience modifying database structure without a graphical interface. I can access the remote database in the Visual Studio database explorer, but I cannot use that for anything other than data manipulation. I installed SQL Management Studio Express last night and after 40+ crashes I gave up - I couldn't even patch the damn thing. The remote server is SQL 2005 / the MyLittleAdmin web interface is available. So my question is: what is the best way to accomplish these changes? Is there a graphical interface I can use on the remote server? If not, is there an easy way to copy the database to my local machine, fix it, and re-upload it? Finally, if none of the above are viable, does anyone have links to decent info on fixing referential integrity via query? Sorry for the somewhat general question - I feel like I am making this far harder than it should be, but after searching / trying all night I haven't gotten anywhere. Thanks in advance for the help. I really appreciate it. ...Also, does anyone have a time machine I can borrow? I need to go kick my past self's ass for this.

    Read the article

  • Implementing a bitfield using java enums

    - by soappatrol
    Hello, I maintain a large document archive and I often use bit fields to record the status of my documents during processing or when validating them. My legacy code simply uses static int constants such as: static int DOCUMENT_STATUS_NO_STATE = 0 static int DOCUMENT_STATUS_OK = 1 static int DOCUMENT_STATUS_NO_TIF_FILE = 2 static int DOCUMENT_STATUS_NO_PDF_FILE = 4 This makes it pretty easy to indicate the state a document is in, by setting the appropriate flags. For example: status = DOCUMENT_STATUS_NO_TIF_FILE | DOCUMENT_STATUS_NO_PDF_FILE; Since the approach of using static constants is bad practice and because I would like to improve the code, I was looking to use Enums to achieve the same. There are a few requirements, one of them being the need to save the status into a database as a numeric type. So there is a need to transform the enumeration constants to a numeric value. Below is my first approach and I wonder if this is the correct way to go about this? class DocumentStatus{ public enum StatusFlag { DOCUMENT_STATUS_NOT_DEFINED(1<<0), DOCUMENT_STATUS_OK(1<<1), DOCUMENT_STATUS_MISSING_TID_DIR(1<<2), DOCUMENT_STATUS_MISSING_TIF_FILE(1<<3), DOCUMENT_STATUS_MISSING_PDF_FILE(1<<4), DOCUMENT_STATUS_MISSING_OCR_FILE(1<<5), DOCUMENT_STATUS_PAGE_COUNT_TIF(1<<6), DOCUMENT_STATUS_PAGE_COUNT_PDF(1<<7), DOCUMENT_STATUS_UNAVAILABLE(1<<8), private final long statusFlagValue; StatusFlag(long statusFlagValue) { this.statusFlagValue = statusFlagValue } public long getStatusFlagValue(){ return statusFlagValue } } /** * Translates a numeric status code into a Set of StatusFlag enums * @param numeric statusValue * @return EnumSet representing a documents status */ public EnumSet<StatusFlag> getStatusFlags(long statusValue) { EnumSet statusFlags = EnumSet.noneOf(StatusFlag.class) StatusFlag.each { statusFlag -> long flagValue = statusFlag.statusFlagValue if ( (flagValue&statusValue ) == flagValue ) { statusFlags.add(statusFlag) } } return statusFlags } /** * Translates a set of StatusFlag enums into a numeric status code * @param Set if statusFlags * @return numeric representation of the document status */ public long getStatusValue(Set<StatusFlag> flags) { long value=0 flags.each { statusFlag -> value|=statusFlag.getStatusFlagValue() } return value } public static void main(String[] args) { DocumentStatus ds = new DocumentStatus(); Set statusFlags = EnumSet.of( StatusFlag.DOCUMENT_STATUS_OK, StatusFlag.DOCUMENT_STATUS_UNAVAILABLE) assert ds.getStatusValue( statusFlags )==258 // 0000.0001|0000.0010 long numericStatusCode = 56 statusFlags = ds.getStatusFlags(numericStatusCode) assert !statusFlags.contains(StatusFlag.DOCUMENT_STATUS_OK) assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_TIF_FILE) assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_PDF_FILE) assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_OCR_FILE) } }

    Read the article

  • SQL Server error handling: exceptions and the database-client contract

    - by gbn
    We’re a team of SQL Servers database developers. Our clients are a mixed bag of C#/ASP.NET, C# and Java web services, Java/Unix services and some Excel. Our client developers only use stored procedures that we provide and we expect that (where sensible, of course) they treat them like web service methods. Some our client developers don’t like SQL exceptions. They understand them in their languages but they don’t appreciate that the SQL is limited in how we can communicate issues. I don’t just mean SQL errors, such as trying to insert “bob” into a int column. I also mean exceptions such as telling them that a reference value is wrong, or that data has already changed, or they can’t do this because his aggregate is not zero. They’d don’t really have any concrete alternatives: they’ve mentioned that we should output parameters, but we assume an exception means “processing stopped/rolled back. How do folks here handle the database-client contract? Either generally or where there is separation between the DB and client code monkeys. Edits: we use SQL Server 2005 TRY/CATCH exclusively we log all errors after the rollback to an exception table already we're concerned that some of our clients won't check output paramaters and assume everything is OK. We need errors flagged up for support to look at. everything is an exception... the clients are expected to do some message parsing to separate information vs errors. To separate our exceptions from DB engine and calling errors, they should use the error number (ours are all 50,000 of course)

    Read the article

  • How best to deal with warning c4305 when type could change?

    - by identitycrisisuk
    I'm using both Ogre and NxOgre, which both have a Real typedef that is either float or double depending on a compiler flag. This has resulted in most of our compiler warnings now being: warning C4305: 'argument' : truncation from 'double' to 'Ogre::Real' when initialising variables with 0.1, for example. Normally I would use 0.1f, but then if you change the compiler flag to double precision you would get the reverse warning. I guess it's probably best to pick one and stick with it, but I'd like to write these in a way that would work for either configuration if possible. One fix would be to use #pragma warning (disable : 4305) in files where it occurs, though I don't know if there are any other more complex problems that can be hidden by not having this warning. I understand I would push and pop these in header files too so that they don't end up spreading across code. Another is to create some macro based on the accuracy compiler flag like: #if OGRE_DOUBLE_PRECISION #define INIT_REAL(x) (x) #else #define INIT_REAL(x) static_cast<float>( x ) #endif which would require changing all the variable initialisation done so far, but at least it would be future proof. Any preferences, or something I haven't thought of?

    Read the article

  • One bidirectional TCP socket or two unidirectional ones? (Linux, high volume, low latency)

    - by osgx
    Hello. I need to send (interchange) a high volume of data periodically with the lowest possible latency between 2 machines. The network is rather fast (e.g. 1Gbit or even 2G+). The OS is Linux. Is it faster to use 1 TCP socket (for send and recv) or to use 2 unidirectional TCP sockets? The test for this task is very much like the NetPIPE network benchmark - measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received 3 times at least (in the real task the number of sends is greater; both processes will be sending and receiving, like ping-pong, maybe). The benefit of 2 unidirectional connections comes from Linux: http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847 3847/* 3848 * TCP receive function for the ESTABLISHED state. 3849 * 3850 * It is split into a fast path and a slow path. The fast path is 3851 * disabled when: ... 3859 * - Data is sent in both directions. Fast path only supports pure senders 3860 * or pure receivers (this means either the sequence number or the ack 3861 * value must stay constant) ... 3863 * 3864 * When these conditions are not satisfied it drops into a standard 3865 * receive procedure patterned after RFC793 to handle all cases. 3866 * The first three cases are guaranteed by proper pred_flags setting, 3867 * the rest is checked inline. Fast processing is turned on in 3868 * tcp_data_queue when everything is OK. All the other conditions for disabling the fast path are false, and only a non-unidirectional socket stops the kernel from taking the fast path on receive.

    Read the article

  • How to Treat Race Condition of Session in Web Application?

    - by Morgan Cheng
    I work on an ASP.NET application that has heavy AJAX request traffic. Once a user logs in to our web application, a session is created to store information about this user's state. Currently, our solution to keep session data consistent is quite simple and brutal: each request needs to acquire an exclusive lock before being processed. This works fine for a traditional web application. But when the web application starts to support AJAX, it is no longer efficient. It is quite possible that multiple AJAX requests are sent to the server at the same time without reloading the web page. If all AJAX requests are serialized by the exclusive lock, the response is not so quick. Also, many AJAX requests that don't access the same session variables are blocked as well. If we don't have an exclusive lock for each request, then we need to treat every race condition carefully to avoid deadlock. I'm afraid that would make the code complex and buggy. So, is there any best practice to keep session data consistent and keep the code simple and clean?

    Read the article

  • Rails paginate existing array of ActiveRecord results

    - by SaoiseK
    Hello, I generally use will_paginate for the pagination in my app, but I have hit a stumbling block with my search feature. I'm using Thinking Sphinx for full-text search, which returns results already paginated. The problem I'm having is that after I've received the results from Thinking Sphinx, I need to merge them with some other results and re-order them. Once I've finished processing them, I have an Array of results that is very different from the original returned by TS. As there could be 1000+ results in this Array, pagination is a necessity. The problem is that I can't figure out how to get will_paginate to work with an existing array. I've done some research, and it seems the only solutions to this problem are from several years ago and are based around the old built-in Paginator class. The most recent one I could find that makes use of will_paginate was from devchix from mid-2007: http://www.devchix.com/2007/07/23/will_paginate-array/comment-page-1/ - I've given this a go but it doesn't seem to do anything for me. Are there any current methods for applying pagination (preferably via will_paginate) to existing arrays of AR results?

    Read the article
