Search Results

Search found 8429 results on 338 pages for 'batch processing'.

Page 289/338 | < Previous Page | 285 286 287 288 289 290 291 292 293 294 295 296  | Next Page >

  • ASP.Net ListBox selections not working in Panel?

    - by larryq
    Hi everyone, I'm having trouble processing a listbox after selecting some items from it. In my markup, the listbox is contained within an asp:Panel and is populated during page load in the codebehind. That part works fine. It's when I select various items and submit that I have trouble: my handler loops through the listbox items but doesn't see any as being selected, and I'm not sure why. Here's the markup:

        <asp:Panel ID="panEdit" runat="server" Height="180px" Width="400px" CssClass="ModalWindow">
            <table width="100%">
                <asp:label runat="server">Choose your items</asp:label>
                <tr>
                    <td>
                        <asp:ListBox ID="lstFundList" runat="server" SelectionMode="Multiple"
                                     OnLoad="lstFundList_LoadData">
                        </asp:ListBox>
                    </td>
                </tr>
            </table>
            <asp:Button ID="btnUpdate" runat="server" Text="Update" OnClick="btnUpdate_OnClick"/>
            <asp:Button ID="btnCancel" runat="server" Text="Cancel"
                        OnClientClick="$find('ModalPopupExtender1').hide(); return false;" />
        </asp:Panel>

    In my btnUpdate_OnClick handler I can't see any listbox items that are marked as selected. I assume something strange is going on with respect to postback and the panel?
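    A common cause (an assumption here, since the code-behind isn't shown): if lstFundList_LoadData re-binds the list on every request, the re-bind during the postback replaces the items - and their Selected flags - before btnUpdate_OnClick runs. A minimal sketch of the usual guard:

        // Hypothetical code-behind: bind only on the initial GET.
        // Re-binding on postback wipes the posted-back selections
        // before the click handler gets to inspect them.
        protected void lstFundList_LoadData(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                lstFundList.DataSource = GetFunds(); // GetFunds() is a stand-in for your data access
                lstFundList.DataTextField = "Name";  // assumed field names
                lstFundList.DataValueField = "Id";
                lstFundList.DataBind();
            }
        }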

    Read the article

  • Updating a local sqlite db that is used for local metadata & caching from a service?

    - by Pharaun
    I've searched through the site and haven't found a question/answer that quite answers my question; the closest one I found was: Syncing objects between two disparate systems best approach. Anyway, to begin: because there is no RSS feed available, I'm screen scraping a webpage. It does a fetch, goes through the page to scrape out all of the information I'm interested in, and dumps that information into a sqlite database so that I can query the information at my leisure without repeatedly fetching from the website. However, I'm also storing various metadata on the data itself in the sqlite db, such as: have I looked at the data, is the data new/old, and a bookmark to a chunk of data (think of it as a collection of unrelated data, where the bookmark is just a pointer to where I am in processing/reading the said data). So right now my problem is figuring out how to update the local sqlite database with new and/or changed data from the website in a manner that is effective and straightforward. Here's my current idea:

    1. Download the page itself
    2. Create a temporary table for the parsed data to go into
    3. Do a comparison between the official and the temporary table, and copy updates and/or new information to the official table

    This process seems kind of complicated, because I would have to figure out how to determine if the data in the temporary table is new, updated, or unchanged. So I am wondering if there isn't a better approach, or if anyone has any suggestions on how to architect/structure such a system?
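    For step 3, SQLite can do the new-vs-updated-vs-unchanged classification itself if the rows have a stable natural key. A sketch (assumptions: SQLite 3.24+ for the upsert syntax, Microsoft.Data.Sqlite as the driver, and invented column names - adjust to the real schema):

        using Microsoft.Data.Sqlite;

        using var conn = new SqliteConnection("Data Source=cache.db");
        conn.Open();

        var cmd = conn.CreateCommand();
        cmd.CommandText = @"
            CREATE TEMP TABLE staging (item_key TEXT PRIMARY KEY, payload TEXT);
            -- ... bulk-insert the freshly scraped rows into staging here ...

            -- New rows are inserted; changed rows are updated in place;
            -- unchanged rows are skipped by the WHERE on the update arm.
            -- Metadata columns (seen, bookmark, ...) are never touched.
            INSERT INTO items (item_key, payload)
            SELECT item_key, payload FROM staging
            WHERE true
            ON CONFLICT(item_key) DO UPDATE
                SET payload = excluded.payload
                WHERE excluded.payload <> items.payload;";
        cmd.ExecuteNonQuery();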

    Read the article

  • How to cache pages using background jobs?

    - by Alexandre
    Definitions: resource = collection of database records; regeneration = processing these records and outputting the corresponding HTML. Current flow:

    1. Receive client request
    2. Check for resource in cache
    3. If not in cache, or cache expired, regenerate
    4. Return result

    The problem is that the regeneration step can tie up a single server process for 10-15 seconds. If a couple of users request the same resource, that could result in a couple of processes regenerating the exact same resource simultaneously, each taking up 10-15 seconds. Wouldn't it be preferable to have the frontend signal some background process saying "Hey, regenerate this resource for me"? But then what would it display to the user? "Rebuilding" is not acceptable. All resources would have to be in cache ahead of time. This could be a problem, as the database would almost be duplicated on the filesystem (too big to fit in memory). Is there a way to avoid this? Not ideal, but it seems like the only way out. But then there's one more problem: how to keep two processes from requesting the regeneration of a resource at the same time? The background process could be regenerating the resource when a frontend asks for the regeneration of the same resource. I'm using PHP and the Zend Framework, in case someone wants to offer a platform-specific solution. Not that it matters, though - I think this problem applies to any language/framework. Thanks!
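    The "only regenerate once" half of this is a language-neutral pattern (the question is PHP; there it is usually done with a lock file or an atomic memcached add, serving the stale copy while one worker rebuilds). Sketched in C# purely to illustrate the shape - concurrent callers for the same key share one pending regeneration instead of starting duplicates:

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        // Pattern sketch: the first caller for a key runs the expensive
        // regeneration; callers who arrive while it is in flight share the
        // same Lazy result rather than kicking off their own 10-15 s rebuild.
        class CoalescingCache
        {
            private readonly ConcurrentDictionary<string, Lazy<string>> _pending = new();

            public string GetOrRegenerate(string key, Func<string> regenerate)
            {
                var lazy = _pending.GetOrAdd(key,
                    _ => new Lazy<string>(regenerate,
                         LazyThreadSafetyMode.ExecutionAndPublication));
                try { return lazy.Value; }
                finally { _pending.TryRemove(key, out _); }
            }
        }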

    Read the article

  • Detecting Connection Speed / Bandwidth in .net/WCF

    - by Mystagogue
    I'm writing both client and server code using WCF, and I need to know the "perceived" bandwidth of traffic between the client and server. I could use ping statistics to gather this information separately, but I wonder if there is a way to configure the channel stack in WCF so that the same statistics can be gathered while performing my web service invocations. This would be particularly useful in cases where ICMP is disabled (i.e. ping won't work). In short: while making my regular business-related web service calls (REST calls, to be precise), is there a way to collect connection-speed data implicitly? Certainly I could time the web service round trip and compare that to the size of the data transferred to get an idea of throughput - but I wouldn't know how much of that perceived bandwidth was network-related, and how much was simply due to server-processing latency. I could perhaps solve that by having the server send back a time delta representing server latency, so that the client can compute the actual network traffic time. If a more sophisticated approach is not available, that might be my answer...
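    That last idea is easy to sketch without touching the WCF channel stack (everything here is an assumption for illustration, including the "X-Server-Ms" header name - the service would have to be taught to emit it):

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.Net.Http;

        // Sketch: network time ~= total round trip - server processing time.
        var http = new HttpClient();
        var sw = Stopwatch.StartNew();
        var response = await http.GetAsync("https://example.invalid/api/resource");
        var payload = await response.Content.ReadAsByteArrayAsync();
        sw.Stop();

        double serverMs = double.Parse(
            response.Headers.GetValues("X-Server-Ms").First()); // invented header
        double networkMs = sw.Elapsed.TotalMilliseconds - serverMs;
        double kBps = payload.Length / 1024.0 / (networkMs / 1000.0);
        Console.WriteLine($"~{kBps:F1} KB/s over the wire");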

    Read the article

  • Gracefully handling screen orientation change during activity start

    - by Steve H
    I'm trying to find a way to properly handle setting up an activity whose orientation is determined from data in the intent that launched it. This is for a game where the user can choose levels, some of which are in portrait orientation and some in landscape. The problem I'm facing is that setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE) doesn't take effect until the activity is fully loaded. This is a problem for me because I do some loading and image processing during startup, which I'd like to only have to do once. Currently, if the user chose a landscape level:

    1. The activity starts onCreate(), defaulting to portrait
    2. It discovers, from analysing its launching Intent, that it should be in landscape orientation
    3. It continues regardless, all the way to onResume(), loading information and performing other setup tasks
    4. At this point setRequestedOrientation kicks in, so the application runs through onPause() to onDestroy()
    5. It then starts up again from onCreate() and runs to onResume(), repeating the setup from earlier

    Is there a way to avoid that and not perform the loading twice? Ideally, the activity would know before onCreate was even called whether it should be landscape or portrait, depending on some property of the launching intent, but unless I've missed something that isn't possible. I've managed to hack together a way to avoid repeating the loading by checking a boolean before the time-consuming loading steps, but that doesn't seem like the right way of doing it. I imagine I could override onSaveInstanceState, but that would require a lot of additional coding. Is there a simple way to do this? Thanks!

    Read the article

  • Passing array values in an HTTP request in .NET

    - by Zarjay
    What's the standard way of passing and processing an array in an HTTP request in .NET? I have a solution, but I don't know if it's the best approach. Here's my solution:

        <form action="myhandler.ashx" method="post">
            <input type="checkbox" name="user" value="Aaron" />
            <input type="checkbox" name="user" value="Bobby" />
            <input type="checkbox" name="user" value="Jimmy" />
            <input type="checkbox" name="user" value="Kelly" />
            <input type="checkbox" name="user" value="Simon" />
            <input type="checkbox" name="user" value="TJ" />
            <input type="submit" value="Submit" />
        </form>

    The ASHX handler receives the "user" parameter as a comma-delimited string. You can get the values easily by splitting the string:

        public void ProcessRequest(HttpContext context)
        {
            string[] users = context.Request.Form["user"].Split(',');
        }

    So, I already have an answer to my problem: assign multiple values to the same parameter name, assume the ASHX handler receives it as a comma-delimited string, and split the string. My question is whether or not this is how it's typically done in .NET. What's the standard practice for this? Is there a simpler way to grab the multiple values than assuming that the value is comma-delimited and calling Split() on it? Is this how arrays are typically passed in .NET, or is XML used instead? Does anyone have any insight on whether or not this is the best approach?
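    For what it's worth, the framework can hand back the repeated values directly: Request.Form is a NameValueCollection, and its GetValues method returns each submitted value separately, so the comma convention (which breaks if a value itself contains a comma) never enters the picture. A sketch:

        public void ProcessRequest(HttpContext context)
        {
            // GetValues returns one entry per checked box; null if none were checked.
            string[] users = context.Request.Form.GetValues("user") ?? new string[0];
            foreach (string user in users)
            {
                // process each selected user here
            }
        }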

    Read the article

  • I can't figure out why this fps counter is inaccurate.

    - by rmetzger
    I'm trying to track frames per second in my game. I don't want the fps to show as an average; I want to see how the frame rate is affected when I push keys, add models, etc. So I am using variables to store the current time and previous time, and when they differ by one second I update the fps. My problem is that it is showing around 33 fps, but when I move the mouse around really fast, the fps jumps up to 49 fps. Other times, if I change a simple line of code elsewhere not related to the frame counter, or close the project and open it later, the fps will be around 60. Vsync is on, so I can't tell if the mouse is still affecting the fps. Here is my code, which is in an update function that runs every frame:

        FrameCount++;
        currentTime = timeGetTime();
        static unsigned long prevTime = currentTime;
        TimeDelta = (currentTime - prevTime) / 1000;
        if (TimeDelta > 1.0f)
        {
            fps = FrameCount / TimeDelta;
            prevTime = currentTime;
            FrameCount = 0;
            TimeDelta = 0;
        }

    Here are the variable declarations:

        int FrameCount;
        double fps, currentTime, prevTime, TimeDelta, TimeElapsed;

    Please let me know what is wrong here and how to fix it, or if you have a better way to count fps. Thanks! I am using DirectX 9, by the way, but I doubt that is relevant, and I am using PeekMessage. Should I be using an if-else statement instead? Here is my message processing loop:

        MSG msg;
        ZeroMemory(&msg, sizeof(MSG));
        while (msg.message != WM_QUIT)
        {
            if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            Update();
            RenderFrame();
        }

    Read the article

  • Upload using python script takes very long on one laptop as compared to another

    - by Engr Am
    I have Python 2.7 code which uses the storbinary function for uploading files to an FTP server and retrbinary for downloading from this server. However, the issue is that the upload takes a very long time on three laptops from different brands as compared to a Dell laptop. The strange part is that when I manually upload any file, it takes the same time on all the systems. The manual upload rate and the upload rate with the Python script are the same on the Dell laptop. However, on every other brand of laptop (I have tried IBM, Toshiba, and Fujitsu-Siemens) the Python script has a much lower upload rate than the manual attempt. Also, on all these other laptops, the upload rate using the Python script is the same (1 Mbit/s) while the manual upload rate is approx. 8 Mbit/s. I have tried varying the file size for the upload, to no avail. TCP Optimizer improved the download rate on all the systems but had no effect on the upload rate. The download rate using this script is fine on all the systems, and the same as the manual download rate. I have checked the server and it has more than 90% free space. The network connection is the same for all the laptops, and I try uploading with only one laptop at a time. All the laptops have almost the same system configuration, the same operating system, and approximately the same free drive space. If anything, the Dell laptop has a little less processing power and RAM than two of the others, but I suppose this has no effect, as I have checked many times how much CPU and network were used during these uploads and downloads, and I am sure that no other virus or program has been eating up my bandwidth. Here is the code ('ftp' and 'file_path' are inputs to the function):

        path, filename = os.path.split(file_path)
        filesize = os.path.getsize(file_path)
        deffilesize = (filesize / 1024) / 1024
        f = open(file_path, "rb")
        upstart = time.clock()
        print ftp.storbinary("STOR " + filename, f)
        upende = time.clock() - upstart
        outname = "Upload "
        f.close()
        return upende, deffilesize, outname

    Read the article

  • SocialAuth.NET is not working

    - by Alen Joy
    I'm trying to use SocialAuth.NET to extract contacts from Gmail and Yahoo for my web application, but when I run the WebformDemo the following error occurs:

        Server Error in '/Demo' Application.
        Configuration Error
        Description: An error occurred during the processing of a configuration file required
        to service this request. Please review the specific error details below and modify
        your configuration file appropriately.

        Parser Error Message: Unrecognized attribute 'targetFramework'. Note that attribute
        names are case-sensitive.

        Source Error:
        Line 76:   </authentication>-->
        Line 77:   <!--<authentication mode="None"/>-->
        Line 78:   <compilation debug="true" targetFramework="4.0">
        Line 79:     <assemblies>
        Line 80:       <add assembly="Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />

        Source File: D:\test\SocialAuth-net-2.3\WebFormsDemo\web.config    Line: 78
        Version Information: Microsoft .NET Framework Version:2.0.50727.3053; ASP.NET Version:2.0.50727.3053

    I'm using Windows XP and Visual Studio 2010. Any help?

    Read the article

  • Which tools to use and how to find file descriptors leaking from Glassfish?

    - by cclark
    We release new code to production every week and Glassfish hasn't had any problems. This weekend we had to move racks at our hosting provider. There were no code changes (they just powered off, moved, re-racked and powered on), but we're on a new network infrastructure and suddenly we're leaking file descriptors like a sieve. So I'm guessing there is some sort of connection attempting to be made which now fails due to the network change. I'm running Glassfish v2ur2-b04/AS9.1_02 on RHEL4 with an embedded IMQ instance. After the move I started seeing:

        [#|2010-04-25T05:34:02.783+0000|SEVERE|sun-appserver9.1|javax.enterprise.system.container.web|_ThreadID=33;_ThreadName=SelectorThread-?4848;_RequestID=c4de6f6d-c1d6-416d-ac6e-49750b1a36ff;|WEB0756: Caught exception during HTTP processing.
        java.io.IOException: Too many open files
            at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        ...
        [#|2010-04-25T05:34:03.327+0000|WARNING|sun-appserver9.1|javax.enterprise.system.stream.err|_ThreadID=34;_ThreadName=Timer-1;_RequestID=d27e1b94-d359-4d90-a6e3-c7ec49a0f383;|java.lang.NullPointerException
            at com.sun.jbi.management.system.AutoAdminTask.pollAutoDirectory(AutoAdminTask.java:1031)

    Using lsof I check the number of file descriptors, and I see quite a few entries which look like:

        java  18510  root  8556u  sock  0,4  1555182  can't identify protocol
        java  18510  root  8557u  sock  0,4  1555320  can't identify protocol
        java  18510  root  8558u  sock  0,4  1555736  can't identify protocol
        java  18510  root  8559u  sock  0,4  1555883  can't identify protocol

    If I count the open file descriptors every minute, I see the number growing by 12 each minute. I have no idea what these sockets are. I've undeployed my application, so there is only a plain Glassfish instance running, and I still see it leaking 12 file descriptors a minute. So I think this leak is in Glassfish or potentially IMQ. What approach should I take to tracking down these sockets of unknown protocol? What tools can I use (or flags can I pass to lsof) to get more information about where to look? Thanks, chuck

    Read the article

  • Simplify a preload image function in jQuery

    - by robertdd
    After I spent two days searching for a preload function in jQuery with no success, I managed to do this. After I upload the image to the server, I get this in a list:

        <li id="upimagesDYGONI">
            <div class="percentage">100%</div>
            <div class="uploadifyProgress">
                <div align="center" class="uploadifyProgressBar" id="upimagesDYGONIProgressBar" style="width: 100%;">
                    <!--Progress Bar-->
                </div>
            </div>
        </li>

    With this function, I preload the image:

        (function($) {
            $.getlastimage = function(id) {
                $.getJSON('operations.php', {'operation':'getli', 'id':id,}, function(lastimg) {
                    $("#upimages" + id + " .percentage").text('processing');
                    $("#upimages" + id).append('<a href=""><img id="' + id + '" src="" alt="" /></a>')
                        .parent().attr({"href": "uploads/" + lastimg + '?' + (new Date()).getTime()});
                    $("#" + id).hide()
                        .attr({"src": "uploads/" + lastimg + '?' + (new Date()).getTime(), "alt": lastimg})
                        .load(function() {
                            $(this).show();
                            $("#upimages" + id + " .percentage").remove();
                            $("#upimages" + id + " .uploadifyProgress").remove();
                        });
                });
            }
        })(jQuery)

    And I get this:

        <li id="upimagesLRBHYN" style="" class="">
            <a href="uploads/0002.jpg?1271901177027">
                <img alt="0002.jpg" src="uploads/0002.jpg?1271901177028" id="LRBHYN" style="display: block;">
            </a>
        </li>

    How can I simplify the function? I want to use the full power of jQuery! Any ideas?

    Read the article

  • Some Async Socket Code - Help with Garbage Collection?

    - by divinci
    Hi all, I think this question is really about my understanding of garbage collection and variable references, but I will go ahead and throw out some code for you to look at.

        // Please note: do not use this code for async sockets; it's just to highlight my question.
        // SocketTransport
        // This is a simple wrapper class that is used as the 'state' object
        // when performing async socket reads/writes.
        public class SocketTransport
        {
            public Socket Socket;
            public byte[] Buffer;

            public SocketTransport(Socket socket, byte[] buffer)
            {
                this.Socket = socket;
                this.Buffer = buffer;
            }
        }

        // Entry point - creates a SocketTransport, then passes it as the state
        // object when asynchronously reading from the socket.
        public void ReadOne(Socket socket)
        {
            SocketTransport socketTransport_One = new SocketTransport(socket, new byte[10]);
            socketTransport_One.Socket.BeginReceive
            (
                socketTransport_One.Buffer,      // Buffer to store data
                0,                               // Buffer offset
                10,                              // Read length
                SocketFlags.None,                // SocketFlags
                new AsyncCallback(OnReadOne),    // Callback when BeginReceive completes
                socketTransport_One              // 'state' object to pass to the callback
            );
        }

        public void OnReadOne(IAsyncResult ar)
        {
            SocketTransport socketTransport_One = ar.AsyncState as SocketTransport;
            ProcessReadOneBuffer(socketTransport_One.Buffer);  // Do processing

            // New read
            // Create another SocketTransport (what happens to the first one?)
            SocketTransport socketTransport_Two = new SocketTransport(socket, new byte[10]);
            socketTransport_Two.Socket.BeginReceive
            (
                socketTransport_One.Buffer,
                0,
                10,
                SocketFlags.None,
                new AsyncCallback(OnReadTwo),
                socketTransport_Two
            );
        }

        public void OnReadTwo(IAsyncResult ar)
        {
            SocketTransport socketTransport_Two = ar.AsyncState as SocketTransport;
            ..............

    So my question is: the first SocketTransport to be created (socketTransport_One) has a strong reference to a Socket object (let's call it ~SocketA~). Once the async read is completed, a new SocketTransport object is created (socketTransport_Two), also with a strong reference to ~SocketA~. Q1. Will socketTransport_One be collected by the garbage collector when the method OnReadOne exits, even though it still contains a strong reference to ~SocketA~? Thanks all!

    Read the article

  • Programmatically controlled virtual drive

    - by Robert Lin
    How would I go about creating a virtual drive whose contents I can programmatically and dynamically change? For instance, program A starts running and creates a virtual drive. When program B looks in the drive, it sees an error log and starts reading/processing it. In the middle of all this, program A gets a signal from somewhere and decides to add to the log. I want program B to be unaware of the change and just keep on going; program B should continue reading as if nothing happened. Program A would just report a ridiculously large file size for the log and then fill it in as appropriate. Program A would fill the log with tags if program B tries to read past the last entry. I know this is a weird request, but there's really no other way to do this... I basically can't rewrite program B, so I need to fool it. How do I do this on Windows? How about OS X?

    Read the article

  • Why is execution of a portion of code loaded from an external file not halted by the OS?

    - by menjaraz
    I've harnessed a project released on the internet a long time ago. Here come the details, all irrelevant things being stripped off for the sake of concision and clarity. A binary file whose content is described below:

        HEX DUMP:
        55 89 E5 83 EC 08 C7 45 FC 00 00 00 00 8B 45 FC
        3B 45 10 72 02 EB 19 8B 45 FC 8B 55 0C 01 C2 8B
        45 FC 03 45 08 8A 00 88 02 8D 45 FC FF 00 EB DD
        C6 45 FA 00 83 7D 10 01 76 6C 80 7D FA 00 74 02
        EB 64 C6 45 FA 01 C7 45 FC 00 00 00 00 8B 45 10
        48 39 45 FC 72 02 EB E2 8B 45 FC 8B 4D 0C 01 C1
        8B 45 FC 03 45 0C 8D 50 01 8A 01 3A 02 73 30 8B
        45 FC 03 45 0C 8A 00 88 45 FB 8B 45 FC 8B 55 0C
        01 C2 8B 45 FC 03 45 0C 40 8A 00 88 02 8B 45 FC
        03 45 0C 8D 50 01 8A 45 FB 88 02 C6 45 FA 00 8D
        45 FC FF 00 EB A7 C9 C2 0C 00 90 90 90 90 90 90

    is loaded into memory and executed using the following method snippet:

        var
          MySrcArray, MyDestArray: array [1..15] of Byte;
          // ...
          MyBuffer: Pointer;
          TheProc: procedure;
          SortIt: procedure(ASrc, ADest: Pointer; ASize: LongWord); stdcall;
        begin
          // Initialization of MySrcArray with random bytes, and display, here ...
          // Loading of the binary file into MyBuffer using merely GetMem here ...
          @SortIt := MyBuffer;
          try
            SortIt(@MySrcArray, @MyDestArray, 15);
            // Display of MyDestArray (the outcome of the processing!)
          except
            // Invalid code error handling
          end;
          // Cleaning code here ...
        end;

    It works like a charm on my box. My question: how come it works without using VirtualAlloc and/or VirtualProtect?

    Read the article

  • CentOS: make python 2.6 see django

    - by NP
    In a harrowing attempt to get mod_wsgi to run on CentOS 5.4, I've added Python 2.6 as an optional library, following the instructions here. The configuration seems fine, except that when trying to ping the server, the Apache log prints this error:

        mod_wsgi (pid=20033, process='otalo', application='127.0.0.1|'): Loading WSGI script '...django.wsgi'.
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218] mod_wsgi (pid=20033): Target WSGI script '...django.wsgi' cannot be loaded as Python module.
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218] mod_wsgi (pid=20033): Exception occurred processing WSGI script '...django.wsgi'.
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218] Traceback (most recent call last):
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218]   File "...django.wsgi", line 8, in <module>
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218]     import django.core.handlers.wsgi
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218] ImportError: No module named django.core.handlers.wsgi

    When I go to my Python 2.6 install's command line and try 'import django', the module is not found (ImportError). However, my default Python 2.4 installation (still working fine) is able to import it successfully. How do I point Python 2.6 to Django? Thanks in advance.

    Read the article

  • Python: Catching / blocking SIGINT during system call

    - by danben
    I've written a web crawler that I'd like to be able to stop via the keyboard. I don't want the program to die when I interrupt it; it needs to flush its data to disk first. I also don't want to catch KeyboardInterrupt, because the persistent data could be in an inconsistent state. My current solution is to define a signal handler that catches SIGINT and sets a flag; each iteration of the main loop checks this flag before processing the next url. However, I've found that if the system happens to be executing socket.recv() when I send the interrupt, I get this:

        ^C
        Interrupted; stopping...    // indicates my interrupt handler ran
        Traceback (most recent call last):
          File "crawler_test.py", line 154, in <module>
            main()
          ...
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 397, in readline
            data = recv(1)
        socket.error: [Errno 4] Interrupted system call

    and the process exits completely. Why does this happen? Is there a way I can prevent the interrupt from affecting the system call?

    Read the article

  • Search for a date between given ranges - Lotus

    - by Kris.Mitchell
    I have been trying to work out the best way to gather all of the documents in a database that have a certain date. Originally I was trying to use FTSearch or Search to move through a document collection, but I changed over to processing a view and its associated documents. My first question is: what is the easiest way to spin through a set of documents and find whether a date stored in the documents is greater than or less than a specified date? So, to continue working, I implemented the following code:

        If (doc.creationDate(0) > CDat(parm1)) And (doc.creationDate(0) < CDat(parm2)) Then
            ...
        End If

    but the results are off:

        Included! Date: 3/12/10 11:07:08    P1: 3/1/10    P2: 3/5/10
        Included! Date: 3/13/10 9:15:09     P1: 3/1/10    P2: 3/5/10
        Included! Date: 3/17/10 16:22:07    P1: 3/1/10    P2: 3/5/10

    You can see that the date stored in the doc is not between P1 and P2. BUT it does limit the documents with a date less than P1 correctly, so I won't get a result for a document with a date less than 3/1/10. If there isn't a better way than the if statement, can someone help me understand why the examples above are included?

    Read the article

  • Read from one large file and write to many (tens, hundreds, or thousands of) files in Java?

    - by Rudiger
    I have a largish file (4-5 GB compressed) of small messages that I wish to parse into approximately 6,000 files by message type. Messages are small: anywhere from 5 to 50 bytes, depending on the type. Each message starts with a fixed-size type field (a 6-byte key). If I read a message of type '000001', I want to append its payload to 000001.dat, etc. The input file contains a mixture of messages; I want N homogeneous output files, where each output file contains only the messages of a given type. What's an efficient, fast way of writing these messages to so many individual files? I'd like to use as much memory and processing power as it takes to get it done as fast as possible. I can write compressed or uncompressed files to the disk. I'm thinking of using a hashmap with a message-type key and an outputstream value, but I'm sure there's a better way to do it. Thanks!
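    The hashmap-of-streams idea is workable as-is; the usual refinements are buffering each stream and capping how many stay open. Sketched below in C# (the Java shape, a HashMap<String, OutputStream> of BufferedOutputStreams, is directly analogous); the message framing - a 1-byte length after the 6-byte key - is an invented assumption, since the question doesn't specify one:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Text;

        var writers = new Dictionary<string, BinaryWriter>();

        BinaryWriter WriterFor(string key)
        {
            if (!writers.TryGetValue(key, out var w))
            {
                // 64 KB buffer so each tiny message doesn't cost a syscall.
                var fs = new FileStream(key + ".dat", FileMode.Append,
                                        FileAccess.Write, FileShare.None, 1 << 16);
                writers[key] = w = new BinaryWriter(fs);
            }
            return w;
        }

        using var input = new BufferedStream(File.OpenRead("messages.bin"), 1 << 20);
        var reader = new BinaryReader(input);
        var key = new byte[6];

        while (reader.Read(key, 0, 6) == 6)
        {
            byte len = reader.ReadByte();              // assumed length prefix
            byte[] payload = reader.ReadBytes(len);
            WriterFor(Encoding.ASCII.GetString(key)).Write(payload);
        }

        foreach (var w in writers.Values) w.Dispose(); // flush and close all outputs

    With ~6,000 types the dictionary holds ~6,000 open handles, which most systems allow; if you hit the OS limit, the standard fix is to close the least-recently-used writers and reopen on demand (FileMode.Append makes reopening safe).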

    Read the article

  • Which MS technology to use for an HTTP service returning XML?

    - by Borek
    I need to create a service that:

    - accepts HTTP requests (with query string or HTTP POST parameters)
    - does some processing on the requests (checking if the request is valid, authentication, etc.)
    - reads data from a custom store (another HTTP call, in our case)
    - returns the result as custom XML (defined with XSD)

    I'm trying to think of the various MS technologies that could help me, and how well they would suit this scenario (a pretty standard one, I guess). The tasks above are relatively separate; this is what comes to mind:

    HTTP front-end:
    - ASP.NET Web Forms
    - ASP.NET MVC (seems more appropriate here, as I won't need server controls, view state, etc.)
    - WCF? I don't know much about it or how well it would suit my task.

    Custom logic on the server:
    - This will probably be generic C# code in all cases (sometimes "plugged into" or called from MVC controllers, or some equivalent place in other technologies).

    Reading data from internal data stores (as said, this is another HTTP server in our case):
    - Just read the data using something like WebClient
    - (Just theoretically) implement a LINQ provider
    - (Just even more theoretically) implement an EF provider

    Output the data as custom XML (see the sketch after this list):
    - LINQ to XML
    - Serialization? Is it flexible enough?
    - Does WCF provide some tools for this?
    - Some "OXM" - an Object/XML mapper, if there is something like that for .NET

    I may be wrong in many of my assumptions; this is just a quick list that comes to mind after quick research. Some general notes/questions:

    - Testing is important
    - A solution with a clear domain model would be much preferred over one without
    - Can Entity Framework actually help somewhere in my scenario? If so, where and how?
    - Would WCF be an appropriate technology for this? I don't know much about it.
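    On the "output as custom XML" point, LINQ to XML on its own is usually flexible enough for XSD-shaped documents. A sketch (the element and attribute names are placeholders, not taken from any real schema):

        using System;
        using System.Linq;
        using System.Xml.Linq;

        var users = new[] { ("Aaron", 42), ("Bobby", 17) };  // stand-in data

        // Build the response document declaratively; nesting in code
        // mirrors nesting in the output, which keeps it easy to audit
        // against the XSD.
        var doc = new XDocument(
            new XElement("response",
                new XAttribute("status", "ok"),
                new XElement("users",
                    from u in users
                    select new XElement("user",
                        new XAttribute("name", u.Item1),
                        new XElement("score", u.Item2)))));

        Console.WriteLine(doc);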

    Read the article

  • Speeding up inner-joins and subqueries while restricting row size and table membership

    - by hiffy
    I'm developing an RSS feed reader that uses a Bayesian filter to filter out boring blog posts. The stream table is meant to act as a FIFO buffer from which the webapp will consume 'entries'. I use it to store the temporary relationship between entries, users, and Bayesian filter classifications. After a user marks an entry as read, it is added to the metadata table (so that a user isn't presented with material they have already read) and deleted from the stream table. Every three minutes, a background process repopulates the stream table with new entries (i.e. whenever the daemon adds new entries after it checks the RSS feeds for updates). Problem: the query I came up with is hella slow. More importantly, the stream table only needs to hold one hundred unread entries at a time; that would reduce duplication, make processing faster, and give me some flexibility in how I display the entries. The query (takes about 9 seconds on 3600 items with no indexes):

        insert into stream (entry_id, user_id)
        select entries.id, subscriptions_users.user_id
        from entries
        inner join subscriptions_users
            on subscriptions_users.subscription_id = entries.subscription_id
        where subscriptions_users.user_id = 1
          and entries.id not in (select entry_id from metadata where metadata.user_id = 1)
          and entries.id not in (select entry_id from stream where user_id = 1);

    The query explained: insert into stream all of the entries from a user's subscription list (subscriptions_users) that the user has not read (i.e. that do not exist in metadata) and that do not already exist in the stream. Attempted solution: adding limit 100 to the end speeds up the query considerably, but upon repeated executions it keeps adding a different set of 100 entries that do not already exist in the table (with each successful query taking longer and longer). This is close, but not quite what I wanted to do. Does anyone have any advice (nosql?) or know a more efficient way of composing the query?

    Read the article

  • Yet another Haskell vs. Scala question

    - by Travis Brown
    I've been using Haskell for several months, and I love it—it's gradually become my tool of choice for everything from one-off file renaming scripts to larger XML processing programs. I'm definitely still a beginner, but I'm starting to feel comfortable with the language and the basics of the theory behind it. I'm a lowly graduate student in the humanities, so I'm not under a lot of institutional or administrative pressure to use specific tools for my work. It would be convenient for me in many ways, however, to switch to Scala (or Clojure). Most of the NLP and machine learning libraries that I work with on a daily basis (and that I've written in the past) are Java-based, and the primary project I'm working for uses a Java application server. I've been mostly disappointed by my initial interactions with Scala. Many aspects of the syntax (partial application, for example) still feel clunky to me compared to Haskell, and I miss libraries like Parsec and HXT and QuickCheck. I'm familiar with the advantages of the JVM platform, so practical questions like this one don't really help me. What I'm looking for is a motivational argument for moving to Scala. What does it do (that Haskell doesn't) that's really cool? What makes it fun or challenging or life-changing? Why should I get excited about writing it?

    Read the article

  • How to specify a parameter as part of every web service call?

    - by LES2
    Currently, each web service for our application has a user parameter that is added to every method. For example:

        @WebService
        public interface FooWebService {

            @WebMethod
            public Foo getFoo(@WebParam(name="alwaysHere", header=true, partName="alwaysHere") String user,
                              @WebParam(name="fooId") Long fooId);

            @WebMethod
            public Result deleteFoo(@WebParam(name="alwaysHere", header=true, partName="alwaysHere") String user,
                                    @WebParam(name="fooId") Long fooId);

            // ...
        }

    There could be twenty methods in a service, each with the first parameter as user. And there could be twenty web services. We don't actually use the 'user' argument in the implementations - in fact, I don't know why it's there - but I wasn't involved in the design, and the person who put it there had a reason (I hope). Anyway, I'm trying to straighten out this Big Ball of Mud. I have already come a long way by wrapping the web services in a Spring proxy, which allows me to do some before-and-after processing in an interceptor (before, there were at least 20 lines of copy-pasted boilerplate per method). I'm wondering if there's some kind of "message header" I can apply to the method or package that can be accessed by some type of handler outside of each web service method. Thanks in advance for the advice, LES

    Read the article

  • Debugging ActionMailer

    - by Trip
    I have ActionMailer set up. Emails are not being sent, and there are no errors. Where can I start my search to debug this?

        class Notifier < ActionMailer::Base
          default_url_options[:host] = APP_DOMAIN

          def email_blast(user, subject, message)
            subject    subject
            from       NOTIFIER_EMAIL
            recipients user.email
            sent_on    Time.zone.now
            body       :user => user.first_name + ' ' + user.last_name,
                       :message => message
          end
        end

    I do get a line in my log saying the email was sent; just no actual email goes through. The reason this broke is that I switched from a cluster to a solo box and some server settings were overwritten; I suspect that is why this is not working. Does anyone know which specific server settings I would have to look at? UPDATE: I found this in my production.rb (this code was originally here when it worked):

        ActionMailer::Base.delivery_method = :sendmail
        config.action_mailer.default_url_options = { :host => "75.101.153.93" }

    Again, I believe there must be something missing on my server. I did a 'which sendmail' and it returned /usr/bin/sendmail, so I added this:

        config.action_mailer.raise_delivery_errors = false
        config.action_mailer.perform_deliveries = true
        config.action_mailer.sendmail_settings = {
          :location  => '/usr/bin/sendmail',
          :arguments => '-i -t'
        }

    I redeployed, restarted the server, and tested it. No emails were sent, but the production.log says something was sent:

        Processing MediaController#create_a_video (for 173.161.167.41 at 2010-06-03 11:58:13) [GET]
          Parameters: {"action"=>"create_a_video", "controller"=>"media", "organization_id"=>"470", "_"=>"1275591493194"}
        Sent mail to [email protected]
        Rendering media/create_a_video
        Completed in 128ms (View: 51, DB: 1) | 200 OK [http://invent.hqchannel.com/organizations/470/media/create_a_video?_=1275591493194]

    Read the article

  • Does running IIS7 in classic mode affect MVC output caching?

    - by Bob
    I have a need to run an application in classic mode for backwards compatibility with a specific application, and am trying to understand what kind of impact that will have on the performance of an MVC application that is running on the site. If we put a few static file maps (for .js, .css, .png, etc) above the ASP.NET wildcard map to reduce the amount of processing by the ASP.NET handler, will we be approaching the integrated mode in terms of performance? The thing i'm primarily concerned with is any effect this might have on output caching. I understand that integrated mode might (?) allow for the output cache to handle non ASP.NET content, but that isn't really a concern. We're more interested in ensuring that the MVC application has full use of the output cache. Empirically i've found that the two configurations operate on par when things go well, but if the page references resources that are not available, the integrated mode tends to fail much more quickly than the classic mode (e.g. 500 ms vs 10 seconds), reducing 'hang time' on the page load. Thanks for any feedback.

    Read the article

  • Best way to detect similar email addresses?

    - by Chris
    I have a list of ~20,000 email addresses, some of which I know to be fraudulent attempts to get around a "1 per e-mail" limit ([email protected], [email protected], [email protected], etc...). I want to find similar email addresses for evaluation. Currently I'm using a Levenshtein algorithm to check each e-mail against the others in the list and report any with an edit distance of less than 2. However, this is painstakingly slow. Is there a more efficient approach? The test code I'm using now is:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;
        using System.Threading;

        namespace LevenshteinAnalyzer
        {
            class Program
            {
                const string INPUT_FILE = @"C:\Input.txt";
                const string OUTPUT_FILE = @"C:\Output.txt";

                static void Main(string[] args)
                {
                    var inputWords = File.ReadAllLines(INPUT_FILE);
                    var outputWords = new SortedSet<string>();

                    for (var i = 0; i < inputWords.Length; i++)
                    {
                        if (i % 100 == 0)
                            Console.WriteLine("Processing record #" + i);

                        var word1 = inputWords[i].ToLower();
                        for (var n = i + 1; n < inputWords.Length; n++)
                        {
                            if (i == n) continue;
                            var word2 = inputWords[n].ToLower();
                            if (word1 == word2) continue;
                            if (outputWords.Contains(word1)) continue;
                            if (outputWords.Contains(word2)) continue;

                            var distance = LevenshteinAlgorithm.Compute(word1, word2);
                            if (distance <= 2)
                            {
                                outputWords.Add(word1);
                                outputWords.Add(word2);
                            }
                        }
                    }

                    File.WriteAllLines(OUTPUT_FILE, outputWords.ToArray());
                    Console.WriteLine("Found {0} words", outputWords.Count);
                }
            }
        }
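    A cheaper first pass that avoids the O(n²) scan entirely: canonicalize each address and group on the canonical form. The rules below are assumptions for illustration (stripping "+tags", ignoring dots in the local part as Gmail does, and folding trailing digits so chris1/chris2 collapse onto chris - that last rule is aggressive and will flag some legitimate users, which is fine if the output is only a candidate list for evaluation). Levenshtein can then be reserved for pairs inside small buckets (e.g. same domain) instead of all 20,000² pairs.

        using System;
        using System.IO;
        using System.Linq;

        static string Canonicalize(string email)
        {
            var at = email.ToLowerInvariant().Split('@');
            if (at.Length != 2) return email.ToLowerInvariant();
            string local = at[0], domain = at[1];

            int plus = local.IndexOf('+');
            if (plus >= 0) local = local.Substring(0, plus);   // strip "+tag" aliases
            local = local.Replace(".", "");                    // gmail-style dot folding
            local = local.TrimEnd("0123456789".ToCharArray()); // chris1, chris2 -> chris

            return local + "@" + domain;
        }

        var emails = File.ReadAllLines(@"C:\Input.txt");
        var suspicious = emails
            .GroupBy(Canonicalize)                 // O(n) bucketing
            .Where(g => g.Count() > 1);

        foreach (var g in suspicious)
            Console.WriteLine(g.Key + ": " + string.Join(", ", g.Distinct()));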

    Read the article

< Previous Page | 285 286 287 288 289 290 291 292 293 294 295 296  | Next Page >