Search Results

Search found 6947 results on 278 pages for 'annotation processing'.


  • Python: Catching / blocking SIGINT during system call

    - by danben
    I've written a web crawler that I'd like to be able to stop via the keyboard. I don't want the program to die when I interrupt it; it needs to flush its data to disk first. I also don't want to catch KeyboardInterrupt, because the persistent data could be left in an inconsistent state. My current solution is to define a signal handler that catches SIGINT and sets a flag; each iteration of the main loop checks this flag before processing the next URL. However, I've found that if the system happens to be executing socket.recv() when I send the interrupt, I get this:

        ^C Interrupted; stopping...    // indicates my interrupt handler ran
        Traceback (most recent call last):
          File "crawler_test.py", line 154, in <module>
            main()
          ...
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 397, in readline
            data = recv(1)
        socket.error: [Errno 4] Interrupted system call

    and the process exits completely. Why does this happen? Is there a way I can prevent the interrupt from affecting the system call?
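    A hedged sketch of one way out (editorial, not from the thread): signal.siginterrupt() asks the OS to restart system calls interrupted by SIGINT, and an EINTR retry loop covers cases where that isn't honored. The trade-off is that a restarted recv() blocks until data arrives, so the flag is only checked afterwards. The handler and flag mirror the asker's design; recv_retrying() is an illustrative helper name.

        import errno, signal, socket

        shutdown_requested = False

        def handle_sigint(signum, frame):
            global shutdown_requested
            shutdown_requested = True   # main loop checks this between URLs

        signal.signal(signal.SIGINT, handle_sigint)
        # ask the kernel to restart interrupted system calls (SA_RESTART)
        # instead of failing them with EINTR
        signal.siginterrupt(signal.SIGINT, False)

        def recv_retrying(sock, n):
            """recv() that retries if a signal interrupts the call anyway."""
            while True:
                try:
                    return sock.recv(n)
                except socket.error as e:
                    if e.errno != errno.EINTR:
                        raise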


  • How to specify a parameter as part of every web service call?

    - by LES2
    Currently, each web service for our application has a user parameter that is added to every method. For example:

        @WebService
        public interface FooWebService {
            @WebMethod
            public Foo getFoo(
                @WebParam(name="alwaysHere", header=true, partName="alwaysHere") String user,
                @WebParam(name="fooId") Long fooId);

            @WebMethod
            public Result deleteFoo(
                @WebParam(name="alwaysHere", header=true, partName="alwaysHere") String user,
                @WebParam(name="fooId") Long fooId);

            // ...
        }

    There could be twenty methods in a service, each with the first parameter as user. And there could be twenty web services. We don't actually use the 'user' argument in the implementations - in fact, I don't know why it's there - but I wasn't involved in the design, and the person who put it there had a reason (I hope). Anyway, I'm trying to straighten out this Big Ball of Mud. I have already come a long way by wrapping the web services in a Spring proxy, which allows me to do some before-and-after processing in an interceptor (before, there were at least 20 lines of copy-pasted boilerplate per method). I'm wondering if there's some kind of "message header" I can apply to the method or package that can be accessed by some kind of handler outside of each web service method. Thanks in advance for the advice, LES


  • SocialAuth.NET is not working

    - by Alen Joy
    I'm trying to use SocialAuth.NET to extract contacts from Gmail and Yahoo for my web application, but when I run the WebformDemo the following error occurs:

        Server Error in '/Demo' Application.
        Configuration Error
        Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.
        Parser Error Message: Unrecognized attribute 'targetFramework'. Note that attribute names are case-sensitive.

        Source Error:
        Line 76:   </authentication>-->
        Line 77:   <!--<authentication mode="None"/>-->
        Line 78:   <compilation debug="true" targetFramework="4.0">
        Line 79:     <assemblies>
        Line 80:       <add assembly="Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />

        Source File: D:\test\SocialAuth-net-2.3\WebFormsDemo\web.config    Line: 78
        Version Information: Microsoft .NET Framework Version:2.0.50727.3053; ASP.NET Version:2.0.50727.3053

    I'm using Windows XP and Visual Studio 2010. Any help?


  • How can I access mainframe data with .Net applications and SQL Queries?

    - by orandov
    We have a large amount of data stored on an IBM mainframe in VSAM files. A lot of this data is dropped on the network every night in the form of text files to be processed and dumped into FoxPro and SQL Server databases. There are also many text files produced nightly by custom applications that get uploaded to the mainframe to keep everything in sync. Keeping everything in sync is very tricky, to say the least. We are not getting rid of the mainframe any time soon, and we would like to replace all the nightly batch processing with real-time access to the mainframe data. We would like to be able to:

    - Read data directly from the mainframe and produce reports based on it, possibly using SQL queries.
    - Read and write data from custom .NET applications.

    We are not looking for a new platform to interface with the mainframe, like Information Builders offers. We don't want to build application modules or reports with new "Business Intelligence" tools. We already know how to generate reports and write custom applications using SQL, .NET, Visual Studio, etc. All we are looking for is some sort of adapter to connect to our mainframe data. Any ideas are appreciated.


  • python can't start a new thread

    - by Giorgos Komnino
    I am building a multi-threaded application. I have set up a thread pool: a queue of size N, and N workers that get data from the queue. When all tasks are done I use tasks.join(), where tasks is the queue. The application seems to run smoothly until suddenly, at some point (after 20 minutes, for example), it terminates with the error:

        thread.error: can't start new thread

    Any ideas?

    Edit: The threads are daemon threads and the code is like:

        while True:
            t0 = time.time()
            keyword_statuses = DBSession.query(KeywordStatus).filter(KeywordStatus.status==0).options(joinedload(KeywordStatus.keyword)).with_lockmode("update").limit(100)
            if keyword_statuses.count() == 0:
                DBSession.commit()
                break
            for kw_status in keyword_statuses:
                kw_status.status = 1
            DBSession.commit()

            t0 = time.time()
            w = SWorker(threads_no=32, network_server='http://192.168.1.242:8180/',
                        keywords=keyword_statuses, cities=cities,
                        saver=MySqlRawSave(DBSession), loglevel='debug')
            w.work()
        print 'finished'

    When are the daemon threads killed? When the application finishes, or when work() finishes? Look at the thread pool and the worker (it's from a recipe):

        from Queue import Queue
        from threading import Thread, Event, current_thread
        import time

        event = Event()

        class Worker(Thread):
            """Thread executing tasks from a given tasks queue"""
            def __init__(self, tasks):
                Thread.__init__(self)
                self.tasks = tasks
                self.daemon = True
                self.start()

            def run(self):
                '''Start processing tasks from the queue'''
                while True:
                    event.wait()
                    #time.sleep(0.1)
                    try:
                        func, args, callback = self.tasks.get()
                    except Exception, e:
                        print str(e)
                        return
                    else:
                        if callback is None:
                            func(args)
                        else:
                            callback(func(args))
                        self.tasks.task_done()

        class ThreadPool:
            """Pool of threads consuming tasks from a queue"""
            def __init__(self, num_threads):
                self.tasks = Queue(num_threads)
                for _ in range(num_threads):
                    Worker(self.tasks)

            def add_task(self, func, args=None, callback=None):
                '''Add a task to the queue'''
                self.tasks.put((func, args, callback))

            def wait_completion(self):
                '''Wait for completion of all the tasks in the queue'''
                self.tasks.join()

            def broadcast_block_event(self):
                '''blocks running threads'''
                event.clear()

            def broadcast_unblock_event(self):
                '''unblocks running threads'''
                event.set()

            def get_event(self):
                '''returns the event object'''
                return event
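    An editorial guess at the culprit, with a sketch: if each pass through the outer loop constructs a new SWorker with 32 fresh threads, and those threads never exit, the process eventually hits the OS thread limit. Creating the pool once and feeding it batches avoids that. In the sketch below, fetch_next_batch() and process() are hypothetical stand-ins for the DB query and per-keyword work shown above.

        # a sketch: create the pool once and feed it batches, instead of
        # constructing a new multi-threaded worker per loop iteration
        from Queue import Queue           # `queue` on Python 3
        from threading import Thread

        def worker(tasks):
            while True:
                func, args = tasks.get()
                try:
                    func(*args)
                finally:
                    tasks.task_done()     # always mark done, even on error

        tasks = Queue()
        for _ in range(32):               # fixed thread count for the process
            t = Thread(target=worker, args=(tasks,))
            t.daemon = True
            t.start()

        while True:
            batch = fetch_next_batch()    # hypothetical: the DB query above
            if not batch:
                break
            for item in batch:
                tasks.put((process, (item,)))   # `process` is hypothetical
            tasks.join()                  # wait for this batch before the next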


  • convert MsSql StoredProcedure to MySql

    - by karthik
    I need to convert the following MS SQL stored procedure to MySQL. I am new to MySQL... help needed.

        CREATE PROC InsertGenerator
        (@tableName varchar(100)) as

        --Declare a cursor to retrieve column specific information
        --for the specified table
        DECLARE cursCol CURSOR FAST_FORWARD FOR
        SELECT column_name,data_type FROM information_schema.columns
        WHERE table_name = @tableName
        OPEN cursCol

        DECLARE @string nvarchar(3000)     --for storing the first half of the INSERT statement
        DECLARE @stringData nvarchar(3000) --for storing the data (VALUES) related statement
        DECLARE @dataType nvarchar(1000)   --data types returned for respective columns
        SET @string='INSERT '+@tableName+'('
        SET @stringData=''

        DECLARE @colName nvarchar(50)
        FETCH NEXT FROM cursCol INTO @colName,@dataType
        IF @@fetch_status<0
        BEGIN
            print 'Table '+@tableName+' not found, processing skipped.'
            close cursCol
            deallocate cursCol
            return
        END

        WHILE @@FETCH_STATUS=0
        BEGIN
            IF @dataType in ('varchar','char','nchar','nvarchar')
            BEGIN
                SET @stringData=@stringData+'''''''''+ isnull('+@colName+','''')+'''''',''+'
            END
            ELSE if @dataType in ('text','ntext') --if the datatype is text or something else
            BEGIN
                SET @stringData=@stringData+'''''''''+ isnull(cast('+@colName+' as varchar(2000)),'''')+'''''',''+'
            END
            ELSE IF @dataType = 'money' --because money doesn't get converted from varchar implicitly
            BEGIN
                SET @stringData=@stringData+'''convert(money,''''''+ isnull(cast('+@colName+' as varchar(200)),''0.0000'')+''''''),''+'
            END
            ELSE IF @dataType='datetime'
            BEGIN
                SET @stringData=@stringData+'''convert(datetime,''''''+ isnull(cast('+@colName+' as varchar(200)),''0'')+''''''),''+'
            END
            ELSE IF @dataType='image'
            BEGIN
                SET @stringData=@stringData+'''''''''+ isnull(cast(convert(varbinary,'+@colName+') as varchar(6)),''0'')+'''''',''+'
            END
            ELSE --presuming the data type is int,bit,numeric,decimal
            BEGIN
                SET @stringData=@stringData+'''''''''+ isnull(cast('+@colName+' as varchar(200)),''0'')+'''''',''+'
            END
            SET @string=@string+@colName+','
            FETCH NEXT FROM cursCol INTO @colName,@dataType
        END


  • Read from one large file and write to many (tens, hundreds, or thousands) files in Java?

    - by Rudiger
    I have a large-ish file (4-5 GB compressed) of small messages that I wish to parse into approximately 6,000 files by message type. Messages are small; anywhere from 5 to 50 bytes depending on the type. Each message starts with a fixed-size type field (a 6-byte key). If I read a message of type '000001', I want to append its payload to 000001.dat, etc. The input file contains a mixture of messages; I want N homogeneous output files, where each output file contains only the messages of a given type. What's an efficient, fast way of writing these messages to so many individual files? I'd like to use as much memory and processing power as needed to get it done as fast as possible. I can write compressed or uncompressed files to the disk. I'm thinking of using a hashmap with a message type key and an outputstream value, but I'm sure there's a better way to do it. Thanks!
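    The map-of-streams idea the asker mentions does work; the usual catch is the OS limit on open file descriptors once you approach ~6,000 outputs. A minimal sketch of the approach (in Python rather than Java, purely as illustration; read_payload() is a hypothetical stand-in for whatever message framing the format uses):

        KEY_LEN = 6

        def split_messages(infile, read_payload):
            """Route each message to <type>.dat, keeping handles in a dict."""
            outputs = {}
            try:
                while True:
                    key = infile.read(KEY_LEN)
                    if len(key) < KEY_LEN:
                        break                            # end of input
                    payload = read_payload(infile, key)  # hypothetical framing
                    out = outputs.get(key)
                    if out is None:
                        # beware the per-process descriptor limit; an LRU
                        # cache of open handles is the usual refinement here
                        out = outputs[key] = open(key + '.dat', 'ab')
                    out.write(payload)
            finally:
                for f in outputs.values():
                    f.close()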


  • Speeding up inner-joins and subqueries while restricting row size and table membership

    - by hiffy
    I'm developing an RSS feed reader that uses a Bayesian filter to filter out boring blog posts. The Stream table is meant to act as a FIFO buffer from which the webapp will consume 'entries'. I use it to store the temporary relationship between entries, users and Bayesian filter classifications. After a user marks an entry as read, it will be added to the metadata table (so that a user isn't presented with material they have already read), and deleted from the stream table. Every three minutes, a background process will repopulate the Stream table with new entries (i.e. whenever the daemon adds new entries after it checks the RSS feeds for updates).

    Problem: the query I came up with is hella slow. More importantly, the Stream table only needs to hold one hundred unread entries at a time; that would reduce duplication, make processing faster and give me some flexibility in how I display the entries.

    The query (takes about 9 seconds on 3600 items with no indexes):

        insert into stream(entry_id, user_id)
        select entries.id, subscriptions_users.user_id
        from entries
        inner join subscriptions_users
            on subscriptions_users.subscription_id = entries.subscription_id
        where subscriptions_users.user_id = 1
            and entries.id not in (select entry_id from metadata where metadata.user_id = 1)
            and entries.id not in (select entry_id from stream where user_id = 1);

    The query explained: insert into stream all of the entries from a user's subscription list (subscriptions_users) that the user has not read (i.e. they do not exist in metadata) and which do not already exist in the stream.

    Attempted solution: adding limit 100 to the end speeds up the query considerably, but repeated executions keep adding a different set of 100 entries that do not already exist in the table (with each successive query taking longer and longer). This is close, but not quite what I wanted to do. Does anyone have any advice (NoSQL?) or know a more efficient way of composing the query?


  • Programmatically controlled virtual drive

    - by Robert Lin
    How would I go about creating a virtual drive with which I can programmatically and dynamically change the contents? For instance, program A starts running and creates a virtual drive. When program B looks in the drive, it sees an error log and starts reading/processing it. In the middle of all this, program A gets a signal from somewhere and decides to add to the log. I want program B to be unaware of the change and just keep on going. Program B should continue reading as if nothing happened. Program A would just report a ridiculously large file size for the log and then fill it in as appropriate. Program A would fill the log with tags if program B tries to read past the last entry. I know this is a weird request, but there's really no other way to do this... I basically can't rewrite program B, so I need to fool it. How do I do this in Windows? How about OS X?
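    On OS X (and Linux) this is the textbook use case for a userspace filesystem; on Windows a third-party equivalent such as Dokan fills the same role. A minimal sketch of the "huge advertised size plus padded reads" trick using the fusepy bindings (assumed installed; the mount point and file name are illustrative):

        import errno, stat
        from fuse import FUSE, FuseOSError, Operations

        class GrowingLog(Operations):
            """One virtual file whose contents program A can extend at will."""

            def __init__(self):
                self.data = ''

            def append(self, text):          # called from program A's signal handler
                self.data += text

            def getattr(self, path, fh=None):
                if path == '/':
                    return dict(st_mode=stat.S_IFDIR | 0o755, st_nlink=2)
                if path == '/error.log':
                    # advertise a ridiculously large size so program B
                    # keeps reading past the data that exists so far
                    return dict(st_mode=stat.S_IFREG | 0o444, st_nlink=1,
                                st_size=2 ** 40)
                raise FuseOSError(errno.ENOENT)

            def readdir(self, path, fh):
                return ['.', '..', 'error.log']

            def read(self, path, size, offset, fh):
                chunk = self.data[offset:offset + size]
                # pad reads past the current end with filler "tags"
                return chunk.ljust(size, ' ')

        if __name__ == '__main__':
            FUSE(GrowingLog(), '/mnt/virtual', foreground=True)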


  • 500 Worker Threads, what kind of thread pool?

    - by Submerged
    I am wondering if this is the best way to do this. I have about 500 threads that run indefinitely, but Thread.sleep() for a minute when done one cycle of processing.

        ExecutorService es = Executors.newFixedThreadPool(list.size() + 1);
        for (int i = 0; i < list.size(); i++) {
            es.execute(coreAppVector.elementAt(i)); // coreAppVector is a vector of objects that extend Thread
        }

    The code that is executing is really simple and basically just this:

        class aThread extends Thread {
            public void run() {
                while (true) {
                    Thread.sleep(ONE_MINUTE);
                    // Lots of computation every minute
                }
            }
        }

    I do need a separate thread for each running task, so changing the architecture isn't an option. I tried making my thread pool size equal to Runtime.getRuntime().availableProcessors(), which attempted to run all 500 threads but only let 8 (4 cores x hyperthreading) of them execute. The other threads wouldn't surrender and let other threads have their turn. I tried putting in a wait() and notify(), but still no luck. If anyone has a simple example or some tips, I would be grateful!

    Well, the design is arguably flawed. The threads implement Genetic Programming, or GP, a type of learning algorithm. Each thread analyzes advanced trends and makes predictions. If the thread ever completes, the learning is lost. That said, I was hoping that sleep() would allow me to share some of the resources while one thread isn't "learning".
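    The stock Java answer here would be a ScheduledExecutorService, so that 500 periodic tasks share a small pool instead of each owning a mostly-sleeping thread; that only works if the per-task learning state can live in an object rather than in a dedicated thread. The shape of the idea, sketched in Python with the standard sched module (make_task() and one_cycle() are illustrative names):

        import sched, time

        scheduler = sched.scheduler(time.time, time.sleep)
        ONE_MINUTE = 60

        def make_task(task_id):
            def run():
                one_cycle(task_id)                       # hypothetical per-task work
                scheduler.enter(ONE_MINUTE, 1, run, ())  # re-arm for next minute
            return run

        for task_id in range(500):                       # 500 tasks, no dedicated threads
            scheduler.enter(0, 1, make_task(task_id), ())
        scheduler.run()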


  • JSP: How can I still get the code on my error page to run, even if I can't display it?

    - by Josh Hinman
    I've defined an error-page in my web.xml:

        <error-page>
            <exception-type>java.lang.Exception</exception-type>
            <location>/error.jsp</location>
        </error-page>

    In that error page, I have a custom tag that I created. The tag handler for this tag e-mails me the stack trace of whatever error occurred. For the most part this works great. Where it doesn't work great is if the output has already begun being sent to the client at the time the error occurs. In that case, we get this:

        SEVERE: Exception Processing ErrorPage[exceptionType=java.lang.Exception, location=/error.jsp]
        java.lang.IllegalStateException

    I believe this error happens because we can't redirect a request to the error page after output has already started. The work-around I've used is to increase the buffer size on particularly large JSP pages. But I'm trying to write a generic error handler that I can apply to existing applications, and I'm not sure it's feasible to go through hundreds of JSP pages making sure their buffers are big enough. Is there a way to still allow my stack trace e-mail code to execute in this case, even if I can't actually display the error page to the client?


  • PDF to PNG Processor - Paperclip

    - by Josh Crowder
    I am trying to develop a system in which a user can upload a slideshow (PDF) and it'll export each slide as a PNG. After some digging around I came across a post on here that suggested using a processor. I've had a go, but I can't get the command to run; if it is running, then I don't know what is happening, because no errors are being shown. Any help would be appreciated!

        module Paperclip
          class Slides < Processor
            def initialize(file, options = {}, attachment = nil)
              super
              @file = file
              @instance = options[:instance]
              @current_format = File.extname(@file.path)
              @basename = File.basename(@file.path, @current_format)
            end

            def make
              dst = Tempfile.new([@basename, @format].compact.join("."))
              dst.binmode

              command = <<-end_command
                -size 640x300 #{File.expand_path(dst.path)} tester.png
              end_command

              begin
                success = Paperclip.run("convert", command.gsub(/\s+/, " "))
              rescue PaperclipCommandLineError
                raise PaperclipError, "There was an error processing the thumbnail for #{@basename}"
              end
            end
          end
        end

    I think my problem is with the convert command... When I run that command by hand it works, but it doesn't give the details of each slide; it just executes. What I need to happen is, once it has made all the slides, pass the data back to a new model... or I know where all the slides are, but once I get to that point I'm not sure what to do.


  • Some Async Socket Code - Help with Garbage Collection?

    - by divinci
    Hi all, I think this question is really about my understanding of garbage collection and variable references. But I will go ahead and throw out some code for you to look at.

        // Please note: do not use this code for async sockets; it's just to highlight my question.

        // SocketTransport
        // This is a simple wrapper class that is used as the 'state' object
        // when performing async socket reads/writes
        public class SocketTransport
        {
            public Socket Socket;
            public byte[] Buffer;

            public SocketTransport(Socket socket, byte[] buffer)
            {
                this.Socket = socket;
                this.Buffer = buffer;
            }
        }

        // Entry point - creates a SocketTransport, then passes it as the state
        // object when asynchronously reading from the socket.
        public void ReadOne(Socket socket)
        {
            SocketTransport socketTransport_One = new SocketTransport(socket, new byte[10]);

            socketTransport_One.Socket.BeginReceive(
                socketTransport_One.Buffer,     // Buffer to store data
                0,                              // Buffer offset
                10,                             // Read length
                SocketFlags.None,               // SocketFlags
                new AsyncCallback(OnReadOne),   // Callback when BeginReceive completes
                socketTransport_One             // 'state' object to pass to callback
            );
        }

        public void OnReadOne(IAsyncResult ar)
        {
            SocketTransport socketTransport_One = ar.AsyncState as SocketTransport;
            ProcessReadOneBuffer(socketTransport_One.Buffer);  // Do processing

            // New read
            // Create another SocketTransport (what happens to the first one?)
            SocketTransport socketTransport_Two = new SocketTransport(socketTransport_One.Socket, new byte[10]);

            socketTransport_Two.Socket.BeginReceive(
                socketTransport_Two.Buffer,
                0,
                10,
                SocketFlags.None,
                new AsyncCallback(OnReadTwo),
                socketTransport_Two
            );
        }

        public void OnReadTwo(IAsyncResult ar)
        {
            SocketTransport socketTransport_Two = ar.AsyncState as SocketTransport;
            // ...
        }

    So my question is: the first SocketTransport to be created (socketTransport_One) has a strong reference to a Socket object (let's call it ~SocketA~). Once the async read is completed, a new SocketTransport object is created (socketTransport_Two), also with a strong reference to ~SocketA~.

    Q1. Will socketTransport_One be collected by the garbage collector when method OnReadOne exits, even though it still contains a strong reference to ~SocketA~?

    Thanks all!


  • Yet another Haskell vs. Scala question

    - by Travis Brown
    I've been using Haskell for several months, and I love it—it's gradually become my tool of choice for everything from one-off file renaming scripts to larger XML processing programs. I'm definitely still a beginner, but I'm starting to feel comfortable with the language and the basics of the theory behind it. I'm a lowly graduate student in the humanities, so I'm not under a lot of institutional or administrative pressure to use specific tools for my work. It would be convenient for me in many ways, however, to switch to Scala (or Clojure). Most of the NLP and machine learning libraries that I work with on a daily basis (and that I've written in the past) are Java-based, and the primary project I'm working for uses a Java application server. I've been mostly disappointed by my initial interactions with Scala. Many aspects of the syntax (partial application, for example) still feel clunky to me compared to Haskell, and I miss libraries like Parsec and HXT and QuickCheck. I'm familiar with the advantages of the JVM platform, so practical questions like this one don't really help me. What I'm looking for is a motivational argument for moving to Scala. What does it do (that Haskell doesn't) that's really cool? What makes it fun or challenging or life-changing? Why should I get excited about writing it?


  • Need to close a popup after javascript completes image upload then load new page

    - by Michael Robinson
    I've got a problem. I have an image upload script that runs in a popup when you select the upload button; after it completes, instead of closing, it stays in the uploader.html window. I need it to close and go to a new page. Here is some of the XML that the script uses. Can I change the "urlOnUploadSuccess" line to close the popup and load a new page in the original page I came from?

        .....
        <upload preload="Preloading images:"
                upload="Uploading images to server:"
                prepare="Processing and image compression:"
                of="of" cancel="Cancel" start="Start"
                warning_empty_required_field="Warning! One of the required fields is empty!"
                confirm="Will be uploaded:"/>
        <urls urlToUpload="upload.php?"
              urlOnUploadSuccess="http://www.baublet.com/purchase.html"
              urlOnUploadFail="http://www.baublet.com/tryagain.html"
              urlUpdateFlashPlayer="http://www.baublet.com/flashalternative.html"
              useMessageBoxesAfterUpload="false"
              messageOnUploadSuccess="Images were successfully uploaded!"
              messageOnUploadFail="Error! Images failed to upload!"
              jsFunctionNameOnUpload=""/>
        .....

    Thanks, Michael


  • setting default value of superglobal

    - by Prasoon Saurav
    I have been working on a timesheet management website. I have my home page as index.php:

        //index.php
        <?php
        session_start();
        if($_SESSION['logged']=='set')
        {
            $x=$_SESSION['username'];
            echo '<div align="right">';
            echo 'Welcome ' .$x.'<br/>';
            echo'<a href="logout.php" class="links">&nbsp;<b><u>Logout</u></b></a>' ;
        }
        else if($_SESSION['logged']=='unset')
        {
            echo'<form id="searchform" method="post" action="processing.php">
            <div>
            <div align="right">
            Username&nbsp;<input type="text" name="username" id="s" size="15" value="" />
            &nbsp;Password&nbsp;<input type="password" name="pass" id="s" size="15" value="" />
            <input type="submit" name="submit" value="submit" />
            </div>
            <br />
            </div>
            </form>
            ';
        }
        ?>

    The problem I am facing is that during the first run of this script I get an error:

        Notice: Undefined index: logged in C:\wamp\www\ps\index.php

    but after refreshing the page the error vanishes. How can I correct this problem? logged is a variable which helps determine whether the user is logged in or not. When the user is logged in, $_SESSION['logged'] is 'set'; otherwise 'unset'. I want the default value of $_SESSION['logged'] to be 'unset' prior to the execution of the script. How can I solve this problem?


  • django: control json serialization

    - by abolotnov
    Is there a way to control JSON serialization in Django? The simple code below will return the serialized objects as JSON:

        co = Collection.objects.all()
        c = serializers.serialize('json', co)

    The JSON will look similar to this:

        [
          {
            "pk": 1,
            "model": "picviewer.collection",
            "fields": {
              "urlName": "architecture",
              "name": "\u0413\u043e\u0440\u043e\u0434 \u0438 \u0430\u0440\u0445\u0438\u0442\u0435\u043a\u0442\u0443\u0440\u0430",
              "sortOrder": 0
            }
          },
          {
            "pk": 2,
            "model": "picviewer.collection",
            "fields": {
              "urlName": "nature",
              "name": "\u041f\u0440\u0438\u0440\u043e\u0434\u0430",
              "sortOrder": 1
            }
          },
          {
            "pk": 3,
            "model": "picviewer.collection",
            "fields": {
              "urlName": "objects",
              "name": "\u041e\u0431\u044a\u0435\u043a\u0442\u044b \u0438 \u043d\u0430\u0442\u044e\u0440\u043c\u043e\u0440\u0442",
              "sortOrder": 2
            }
          }
        ]

    You can see it's serializing it in a way that would let you re-create the whole model, should you want to do that at some point - fair enough, but not very handy for simple JS Ajax in my case: I want to keep the traffic to a minimum and make the whole thing a little clearer. What I did is create a view that passes the object to a .json template, and the template does something like this to generate "nicer" JSON output:

        [
        {% if collections %}
        {% for c in collections %}
            {"id": {{c.id}}, "sortOrder": {{c.sortOrder}}, "name": "{{c.name}}", "urlName": "{{c.urlName}}"}{% if not forloop.last %},{% endif %}
        {% endfor %}
        {% endif %}
        ]

    This does work, and the output is much (?) nicer:

        [
          {"id": 1, "sortOrder": 0, "name": "Город и архитектура", "urlName": "architecture"},
          {"id": 2, "sortOrder": 1, "name": "Природа", "urlName": "nature"},
          {"id": 3, "sortOrder": 2, "name": "Объекты и натюрморт", "urlName": "objects"}
        ]

    However, I'm bothered by the fact that my solution uses templates (an extra processing step and a possible performance impact), and it will take manual work to maintain should I update the model, for example. I'm thinking JSON generation should be part of the model (correct me if I'm wrong), done with either the native Python json module or the Django implementation, but I can't figure out how to make it strip the bits that I don't want. One more thing: even when I restrict it to a set of fields to serialize, it keeps the id outside the fields container, presenting it as "pk" instead.
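    One hedged suggestion (editorial, not from the thread): skip the serializers framework entirely and build plain dicts, e.g. with .values(), then hand them to the json module yourself; that drops the pk/model/fields wrapper with no template involved. A minimal sketch, assuming a view in the picviewer app and the field names shown above:

        # views.py -- a sketch; field names match the model shown above
        import json

        from django.http import HttpResponse
        from picviewer.models import Collection

        def collections_json(request):
            # .values() yields plain dicts with exactly the requested keys,
            # including the id under its own name rather than as "pk"
            data = list(Collection.objects.values('id', 'sortOrder', 'name', 'urlName'))
            return HttpResponse(json.dumps(data), content_type='application/json')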


  • Zend_Date and Zend_Locale; can't get it to work ><

    - by Rick de Graaf
    Loving Zend Framework, hating Zend_Date... I'm building an app where one of the functions is to track the time one spends on a certain task. This works great. My previous question was about my incapability to get the sum of all the timestamps (time spent on each task). Well, that works like a charm, but when I echo this value, it adds one hour. Using gmdate() (just for checking), the value turns out exactly as it's supposed to. I thought to solve this problem easily with Zend_Locale, but I can't get the damn thing to work! This is my bootstrap code:

        protected function _initLocale()
        {
            $locale = new Zend_Locale('nl_NL');
            $locale->setLocale($locale);
            Zend_Registry::set('Zend_Locale', $locale);
        }

    This is the code in my view file:

        $date = new Zend_Date($this->timequery, null, $locale);
        echo $date->toString(Zend_Date::HOUR.':'.Zend_Date::MINUTE.':'.Zend_Date::SECOND);

    The timestamp I'm processing is 2632, which equals 00:43:52. The output I get is 01:43:52. I know the extra hour comes from the timezone difference, but I can't seem to solve this with Zend_Locale and Zend_Date.


  • simplify a preload image function in jQuery

    - by robertdd
    After I spent two days searching for a preload function in jQuery with no success, I managed to do this. After I upload the image to the server, in a list I get this:

        <li id="upimagesDYGONI">
            <div class="percentage">100%</div>
            <div class="uploadifyProgress">
                <div align="center" class="uploadifyProgressBar" id="upimagesDYGONIProgressBar" style="width: 100%;">
                    <!--Progress Bar-->
                </div>
            </div>
        </li>

    With this function, I preload the image:

        (function($) {
            $.getlastimage = function(id) {
                $.getJSON('operations.php', {'operation': 'getli', 'id': id}, function(lastimg) {
                    $("#upimages" + id + " .percentage").text('processing');
                    $("#upimages" + id)
                        .append('<a href=""><img id="' + id + '" src="" alt="" /></a>')
                        .parent().attr({"href": "uploads/" + lastimg + '?' + (new Date()).getTime()});
                    $("#" + id).hide()
                        .attr({"src": "uploads/" + lastimg + '?' + (new Date()).getTime(), "alt": lastimg})
                        .load(function() {
                            $(this).show();
                            $("#upimages" + id + " .percentage").remove();
                            $("#upimages" + id + " .uploadifyProgress").remove();
                        });
                });
            }
        })(jQuery)

    And I get this:

        <li id="upimagesLRBHYN" style="" class="">
            <a href="uploads/0002.jpg?1271901177027">
                <img alt="0002.jpg" src="uploads/0002.jpg?1271901177028" id="LRBHYN" style="display: block;">
            </a>
        </li>

    How can I simplify this function? I want to use the full power of jQuery! Any ideas?


  • CentOS: make python 2.6 see django

    - by NP
    In a harrowing attempt to get mod_wsgi to run on CentOS 5.4, I've added Python 2.6 as an optional library following the instructions here. The configuration seems fine, except that when trying to ping the server, the Apache log prints this error:

        mod_wsgi (pid=20033, process='otalo', application='127.0.0.1|'): Loading WSGI script '...django.wsgi'.
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218] mod_wsgi (pid=20033): Target WSGI script '...django.wsgi' cannot be loaded as Python module.
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218] mod_wsgi (pid=20033): Exception occurred processing WSGI script '...django.wsgi'.
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218] Traceback (most recent call last):
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218]   File "...django.wsgi", line 8, in <module>
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218]     import django.core.handlers.wsgi
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218] ImportError: No module named django.core.handlers.wsgi

    When I go to my Python 2.6 install's command line and try 'import django', the module is not found (ImportError). However, my default Python 2.4 installation (still working fine) is able to import it successfully. How do I point Python 2.6 to Django? Thanks in advance.
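    The usual cause is that Django was installed only into Python 2.4's site-packages. Either install it under the Python 2.6 that mod_wsgi was built against (e.g. python2.6 setup.py install), or have the WSGI script extend sys.path itself. A sketch of the latter; the site-packages path and settings module are assumptions that must match the actual install (the log above only shows process='otalo'):

        # django.wsgi -- sketch; adjust the path to the real 2.6 site-packages
        import os
        import sys

        SITE_PACKAGES = '/opt/python2.6/lib/python2.6/site-packages'  # assumption
        if SITE_PACKAGES not in sys.path:
            sys.path.insert(0, SITE_PACKAGES)

        os.environ['DJANGO_SETTINGS_MODULE'] = 'otalo.settings'  # assumption

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()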


  • Passing array values in an HTTP request in .NET

    - by Zarjay
    What's the standard way of passing and processing an array in an HTTP request in .NET? I have a solution, but I don't know if it's the best approach. Here's my solution:

        <form action="myhandler.ashx" method="post">
            <input type="checkbox" name="user" value="Aaron" />
            <input type="checkbox" name="user" value="Bobby" />
            <input type="checkbox" name="user" value="Jimmy" />
            <input type="checkbox" name="user" value="Kelly" />
            <input type="checkbox" name="user" value="Simon" />
            <input type="checkbox" name="user" value="TJ" />
            <input type="submit" value="Submit" />
        </form>

    The ASHX handler receives the "user" parameter as a comma-delimited string. You can get the values easily by splitting the string:

        public void ProcessRequest(HttpContext context)
        {
            string[] users = context.Request.Form["user"].Split(',');
        }

    So, I already have an answer to my problem: assign multiple values to the same parameter name, assume the ASHX handler receives it as a comma-delimited string, and split the string. My question is whether or not this is how it's typically done in .NET. What's the standard practice for this? Is there a simpler way to grab the multiple values than assuming that the value is comma-delimited and calling Split() on it? Is this how arrays are typically passed in .NET, or is XML used instead? Does anyone have any insight on whether or not this is the best approach?


  • SSIS - Bulk Update at Database Field Level

    - by Adam
    Hello, here's our mission: receive files from clients. Each file contains anywhere from 1 to 1,000,000 records. Records are loaded to a staging area and business-rule validation is applied. Valid records are then pumped into an OLTP database in a batch fashion, with the following rules:

    - If the record does not exist (we have a key, so this isn't an issue), create it.
    - If the record exists, optionally update each database field. The decision is made based on one of 3 factors... I don't believe it's important what those factors are.

    Our main problem is finding an efficient method of optionally updating the data at a field level. This applies across ~12 different database tables, with anywhere from 10 to 150 fields in each table (the original DB design leaves much to be desired, but it is what it is). Our first attempt has been to introduce a table that mirrors the staging environment (1 field in staging for each system field) and contains a masking flag. The value of the masking flag represents the 3 factors. We've then put an UPDATE similar to...

        UPDATE OLTPTable1
        SET Field1 = CASE
            WHEN Mask.Field1 = 0 THEN Staging.Field1
            WHEN Mask.Field1 = 1 THEN COALESCE(Staging.Field1, OLTPTable1.Field1)
            WHEN Mask.Field1 = 2 THEN COALESCE(OLTPTable1.Field1, Staging.Field1)
        ...

    As you can imagine, the performance is rather horrendous. Has anyone tackled a similar requirement? We're an MS shop using a Windows service to launch SSIS packages that handle the data processing. Unfortunately, we're pretty much novices at this stuff.


  • Problem in filtering records using Dataview (C#3.0)

    - by Newbie
    I have a data table. The data table is basically populated from an Excel sheet, and there are many Excel sheets; hence, I have written a utility method for accomplishing this. Now in some of the Excel sheets there are date columns, and in some there are not (only text/string). My function is populating the values properly into the data table from the Excel sheet. But there are many blank rows in the Excel sheets; some are filled with NULL, some with " ". So I need to filter those records (which are NULL or " ") before further processing. What I am after is to use a DataView and apply the filter there:

        DataView dv = dataTable.DefaultView;
        dv.RowFilter = ColumnName + " <> ''";

    By using metadata (GetOleDbSchemaTable(OleDbSchemaGuid.Columns, restriction)) I was able to get the column names from the Excel sheet, so getting the column names is not an issue. But the problem is, as I said, in some Excel sheets there are date fields and in some there are not, so the filter condition of the DataView needs to be proper. If I apply the above logic and it encounters a date field, it throws the error:

        Cannot perform '<>' operation on System.DateTime and System.String.

    Could you people please help me out? I need to filter columns (not known at compile time, nor are their data types) which can have NULL and " ". I am using C# 3.0. Thanks


  • Search for a date between given ranges - Lotus

    - by Kris.Mitchell
    I have been trying to work out the best way to gather all of the documents in a database that have a certain date. Originally I was trying to use FTSearch or Search to move through a document collection, but I changed over to processing a view and its associated documents. My first question is: what is the easiest way to spin through a set of documents and find whether a date stored in the documents is greater than or less than a specified date?

    To continue working, I implemented the following code:

        If (doc.creationDate(0) > CDat(parm1)) And (doc.creationDate(0) < CDat(parm2)) Then
            ...
        End If

    but the results are off:

        Included! Date: 3/12/10 11:07:08   P1: 3/1/10   P2: 3/5/10
        Included! Date: 3/13/10 9:15:09    P1: 3/1/10   P2: 3/5/10
        Included! Date: 3/17/10 16:22:07   P1: 3/1/10   P2: 3/5/10

    You can see that the date stored in the doc is not between P1 and P2. BUT it does limit the documents with a date less than P1 correctly: I won't get a result for a document with a date less than 3/1/10. If there isn't a better way than the If statement, can someone help me understand why the examples above are included?


  • mod_rewrite with location-based ACL in apache?

    - by Alexey
    Hi. There is a CGI script that provides an API for our customers. The call syntax is:

        script.cgi?module=<str>&func=<str>[&other-options]

    The task is to apply different authentication rules to different modules. Optionally, it would be great to have nice URLs. My config:

        <VirtualHost *:80>
            DocumentRoot /var/www/example
            ServerName example.com

            # Global policy is to deny all
            <Location />
                Order deny,allow
                Deny from all
            </Location>

            # doesn't work :(
            <Location /api/foo>
                Order deny,allow
                Deny from all
                Allow from 127.0.0.1
            </Location>

            RewriteEngine On
            # The only allowed type of requests:
            RewriteRule /api/(.+?)/(.+) /cgi-bin/api.cgi?module=$1&func=$2 [PT]
            # All others are forbidden:
            RewriteRule /(.*) - [F]
            RewriteLog /var/log/apache2/rewrite.log
            RewriteLogLevel 5

            ScriptAlias /cgi-bin /var/www/example
            <Directory /var/www/example>
                Options -Indexes
                AddHandler cgi-script .cgi
            </Directory>
        </VirtualHost>

    Well, I know the problem is the order in which these directives are processed: the <Location>s will be processed after mod_rewrite has done its work. But I believe there is a way to change that. :) Using the standard Order deny,allow + Allow from <something> directives is preferable, because they're commonly used in other places like this. Thank you for your attention. :)

