Search Results

Search found 11812 results on 473 pages for 'word processing'.

Page 65/473 | < Previous Page | 61 62 63 64 65 66 67 68 69 70 71 72  | Next Page >

  • Processing files with C# in folders whose names contain spaces

    - by Nigel Ainscoe
    There are plenty of C# samples that show how to manipulate files and directories, but they inevitably use folder paths that contain no spaces. In the real world I need to be able to process files in folders whose names contain spaces. I have written the code below, which shows how I have solved the problem. However, it doesn't seem very elegant and I wonder if anyone has a better way.

        class Program
        {
            static void Main(string[] args)
            {
                var dirPath = @args[0] + "\\";
                string[] myFiles = Directory.GetFiles(dirPath, "*txt");
                foreach (var oldFile in myFiles)
                {
                    string newFile = dirPath + "New " + Path.GetFileName(oldFile);
                    File.Move(oldFile, newFile);
                }
                Console.ReadKey();
            }
        }

    Regards, Nigel Ainscoe
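
    One hedged alternative for comparison (a minimal sketch only; it assumes, as in the original, that the first command-line argument is the directory and that a "New " prefix is the desired rename): spaces in folder names need no special handling by System.IO, and Path.Combine removes the manual separator concatenation.

        using System;
        using System.IO;

        class Program
        {
            static void Main(string[] args)
            {
                // Spaces in the directory name are handled transparently by System.IO.
                string dirPath = args[0];

                foreach (string oldFile in Directory.GetFiles(dirPath, "*.txt"))
                {
                    // Path.Combine inserts the separator, so no trailing "\\" is needed.
                    string newFile = Path.Combine(dirPath, "New " + Path.GetFileName(oldFile));
                    File.Move(oldFile, newFile);
                }
            }
        }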

    Read the article

  • Histogram matching - image processing - C/C++

    - by Raj
    Hello, I have two histograms:

        int Hist1[10] = {1,4,3,5,2,5,4,6,3,2};
        int Hist2[10] = {1,4,3,15,12,15,4,6,3,2};

    Hist1's distribution is multi-modal; Hist2's distribution is uni-modal with a single prominent peak. My questions are: Is there any way that I could determine the type of distribution programmatically? How do I quantify whether these two histograms are similar or dissimilar? Thanks
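
    Not part of the original question, but as one concrete illustration of a similarity measure: the Bhattacharyya coefficient compares two histograms after normalising them to probability distributions (1 means identical shape, 0 means no overlap). The thread is about C/C++, but a rough C# sketch of the same idea looks like this:

        using System;
        using System.Linq;

        static class HistogramSimilarity
        {
            // Bhattacharyya coefficient: sum over bins of sqrt(p_i * q_i), where p and q
            // are the histograms normalised to sum to 1. Assumes equal bin counts.
            public static double Bhattacharyya(int[] h1, int[] h2)
            {
                double sum1 = h1.Sum();
                double sum2 = h2.Sum();

                double coefficient = 0.0;
                for (int i = 0; i < h1.Length; i++)
                {
                    coefficient += Math.Sqrt((h1[i] / sum1) * (h2[i] / sum2));
                }
                return coefficient; // close to 1.0 means the shapes are similar
            }
        }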

    Read the article

  • Issue with XSLT Processing on PHP

    - by monksy
    I'm getting a few errors from XSLTProcessor:

        XSLTProcessor::transformToDoc() [function.XSLTProcessor-transformToDoc]: Invalid or inclomplete context
        XSLTProcessor::transformToDoc() [function.XSLTProcessor-transformToDoc]:
        XSLTProcessor::transformToDoc() [function.XSLTProcessor-transformToDoc]: xsltValueOf: text copy failed in

    The XSLT line being processed is:

        <xsl:apply-templates select="page/sections/section" mode="subset"/>

    The section is:

        <xsl:template match="page/sections/section" mode="subset">
          <a href="#{shorttitle}">
            <xsl:value-of select="title"/>
          </a>
          <xsl:if test="position() != last()"> | </xsl:if>
        </xsl:template>

    The XML that the section is parsing is:

        <shorttitle>About</shorttitle>
        <title>#~ About</title>

    The PHP XSLT code is:

        $xslt = new XSLTProcessor();
        $XSL = new DOMDocument();
        $XSL->load( $xsltFile, LIBXML_NOCDATA);
        $xslt->importStylesheet( $XSL );
        print $xslt->transformToXML( $XML );

    My suspicion is that the errors are due to the content. I'm not getting these errors with Firefox's XSLT rendering, nor am I getting an invalid XML document on the backend. I'm not getting errors on the load; it's just on the transformToXML function. Does anyone have a clue on how to solve this? This is with PHP 5.

    Read the article

  • Alternative for PHPLiveDocx?

    - by Derek
    Hello, are there any other free alternatives to PHPLiveDocx? I would like to create a Word document based on templates and user-inputted data. The template will reside on the server, and the client (online, or a C# Windows application) will be used to collect user input. Once the data has been collected, a server-side script (PHP) will be used to generate the Word document. So far I have only found PHPLiveDocx. However, I'm not very comfortable consuming web services from a server that I have no control over (am I worrying too much?). I have also thought about using the client to do the work (C# Windows application, Office Open XML), but I'm not sure that's the right way of doing things. Any guidance/help will be really appreciated! Thanks!
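
    For the client-side route mentioned above, here is a rough sketch of generating a .docx with the Open XML SDK (DocumentFormat.OpenXml). The SDK choice and the file and method names are my own assumptions, not something from the original post:

        using DocumentFormat.OpenXml;
        using DocumentFormat.OpenXml.Packaging;
        using DocumentFormat.OpenXml.Wordprocessing;

        class WordGenerator
        {
            // Creates a minimal .docx containing a single paragraph of user-supplied text.
            static void CreateDocument(string path, string userInput)
            {
                using (var doc = WordprocessingDocument.Create(path, WordprocessingDocumentType.Document))
                {
                    MainDocumentPart mainPart = doc.AddMainDocumentPart();
                    mainPart.Document = new Document(
                        new Body(
                            new Paragraph(
                                new Run(
                                    new Text(userInput)))));
                    mainPart.Document.Save();
                }
            }
        }

    For template filling, the same SDK can open an existing .docx and replace placeholder text or content controls, which is closer to what the post describes.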

    Read the article

  • ASP.NET MVC Controller parameter processing

    - by Leonardo
    In my application I have a string parameter called "shop" that is required in all controllers, but it needs to be transformed using code like this:

        shop = shop.Replace("-", " ").ToLower();

    How can I do this globally for all controllers without repeating this line over and over? Thanks, Leo
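
    One possible approach (an assumption on my part; the post doesn't name a mechanism) is an action filter that rewrites the bound "shop" parameter before any action runs; a custom model binder for a dedicated shop type would be another option. A minimal sketch:

        using System.Web.Mvc;

        public class NormalizeShopAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                // Rewrite the "shop" action parameter, if present, before the action executes.
                object value;
                if (filterContext.ActionParameters.TryGetValue("shop", out value) && value is string)
                {
                    string shop = (string)value;
                    filterContext.ActionParameters["shop"] = shop.Replace("-", " ").ToLower();
                }
                base.OnActionExecuting(filterContext);
            }
        }

    Registered globally (GlobalFilters.Filters.Add(new NormalizeShopAttribute()) in MVC 3+, or applied to a common base controller), every action that takes a shop parameter then receives the normalized value.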

    Read the article

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has an API something like:

        public double[] runMyAlgorithm(double[] inputData);

    or alternatively a reference to an output array could be passed in:

        public void runMyAlgorithm(double[] inputData, double[] outputData);

    Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are:

    - Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage.
    - Pre-allocate temporary arrays and store them as final fields in the algorithm object - the big downside would be that this would mean that only one thread could run the algorithm at any one time.
    - Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously.
    - Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good, since it will make the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space....
    - Allocate extremely large arrays (e.g. double[10000000]) but also provide the algorithm with offsets into the array so that different threads will use a different area of the array independently. Will obviously require some code to manage the offsets and allocation of the array ranges.

    Any thoughts on which approach would be best (and why)?

    Read the article

  • Mono and Apache are serving files with no ASP.NET processing

    - by dnord
    On a new Rackspace Cloud Server box (Ubuntu 9.10), I've installed apache2, libapache2-mod-mono, and mod-mono-server2. I've disabled mod_mono and enabled mod_mono_auto, but whatever I do, requests for Default.aspx return the actual contents of Default.aspx (in this case, "This is a marker file generated by the precompilation tool, and should not be deleted!"). I've installed XSP, and it looks like it works okay, but I'd like to use Apache with mod_mono (it seems a more common configuration) if I can get it running. However, there are no error messages and no hints, and Google is obviously not terribly helpful. What else can I look for to make sure I'm configured correctly? How can I test further?

    Read the article

  • Spring's JMS Design Question: Decouple processing of messages

    - by java_pill
    I'm using a message listener to process some messages from MQ based on Spring's DefaultMessageListenerContainer. After I receive a message, I have to make a Web Service (WS) call. However, I don't want to do this in the onMessage method, because it would block the onMessage method until the invocation of the WS is successful, and this introduces latency in dequeuing messages from the queue. How can I decouple the invocation of the Web Service by calling it outside of the onMessage method, or otherwise without impacting the dequeuing of messages? Thanks,

    Read the article

  • Image processing in a multithreaded mode using Java

    - by jadaaih
    Hi folks, I am supposed to process images in a multithreaded mode using Java. I may have a varying number of images, whereas my number of threads is fixed. I have to process all the images using the fixed set of threads. I am just stuck on how to do it; I had a look at ThreadPoolExecutor and BlockingQueues etc., and I am still not clear. What I am doing is:

    - Get the images and add them to a LinkedBlockingQueue which holds the runnable code of the image processor.
    - Create a ThreadPoolExecutor for which one of the arguments is the LinkedBlockingQueue from earlier.
    - Iterate through a for loop up to the queue size and do threadPoolExecutor.execute(linkedBlockingQueue.poll()).
    - All I see is that it processes only 100 images, which is the minimum thread count / LinkedBlockingQueue size that was passed in.

    I see I am seriously wrong in my understanding somewhere; how do I process all the images in sets of 100 (threads) until they are all done? Any examples or pseudocode would be highly helpful. Thanks! J

    Read the article

  • Processing file uploads before object is saved

    - by Dominic Rodger
    I've got a model like this:

        class Talk(BaseModel):
            title = models.CharField(max_length=200)
            mp3 = models.FileField(upload_to = u'talks/', max_length=200)
            seconds = models.IntegerField(blank = True, null = True)

    I want to validate before saving that the uploaded file is an MP3, like this:

        def is_mp3(path_to_file):
            from mutagen.mp3 import MP3
            audio = MP3(path_to_file)
            return not audio.info.sketchy

    Once I'm sure I've got an MP3, I want to save the length of the talk in the seconds attribute, like this:

        audio = MP3(path_to_file)
        self.seconds = audio.info.length

    The problem is, before saving, the uploaded file doesn't have a path (see this ticket, closed as wontfix), so I can't process the MP3. I'd like to raise a nice validation error so that ModelForms can display a helpful error ("You idiot, you didn't upload an MP3" or something). Any idea how I can go about accessing the file before it's saved? p.s. If anyone knows a better way of validating files are MP3s I'm all ears - I also want to be able to mess around with ID3 data (set the artist, album, title and probably album art, so I need it to be processable by mutagen).

    Read the article

  • Processing RSS/RDF via xml.dom.minidom

    - by Bill
    I'm trying to process a delicious RSS feed via Python. Here's a sample:

        ...
        <item rdf:about="http://weblist.me/">
          <title>WebList - The Place To Find The Best List On The Web</title>
          <dc:date>2009-12-24T17:46:14Z</dc:date>
          <link>http://weblist.me/</link>
          ...
        </item>
        <item rdf:about="http://thumboo.com/">
          <title>Thumboo! Free Website Thumbnails and PHP Script to Generate Web Screenshots</title>
          <dc:date>2006-10-24T18:11:32Z</dc:date>
          <link>http://thumboo.com/</link>
          ...

    The relevant code is:

        def getText(nodelist):
            rc = ""
            for node in nodelist:
                if node.nodeType == node.TEXT_NODE:
                    rc = rc + node.data
            return rc

        dom = xml.dom.minidom.parse(file)
        items = dom.getElementsByTagName("item")
        for i in items:
            title = i.getElementsByTagName("title")
            print getText(title)

    I would think this would print out each title, but instead I basically get blank output. I'm sure I'm doing something stupid wrong, but I have no idea what.

    Read the article

  • MapReduce in DryadLINQ and PLINQ

    - by JoshReuben
    MapReduce

    See http://en.wikipedia.org/wiki/Mapreduce. The MapReduce pattern aims to handle large-scale computations across a cluster of servers, often involving massive amounts of data. "The computation takes a set of input key/value pairs, and produces a set of output key/value pairs. The developer expresses the computation as two Func delegates: Map and Reduce. Map - takes a single input pair and produces a set of intermediate key/value pairs. The MapReduce function groups results by key and passes them to the Reduce function. Reduce - accepts an intermediate key I and a set of values for that key. It merges together these values to form a possibly smaller set of values. Typically just zero or one output value is produced per Reduce invocation. The intermediate values are supplied to the user's Reduce function via an iterator." The canonical MapReduce example: counting word frequency in a text file.

    MapReduce using DryadLINQ

    See http://research.microsoft.com/en-us/projects/dryadlinq/ and http://connect.microsoft.com/Dryad. DryadLINQ provides a simple and straightforward way to implement MapReduce operations. The implementation has two primary components: a Pair structure, which serves as a data container, and a MapReduce method, which counts word frequency and returns the top k words.

    The Pair structure has two properties: Word is a string that holds a word or key; Count is an int that holds the word count. The structure also overrides ToString to simplify printing the results. The following example shows the Pair implementation.

        public struct Pair
        {
            private string word;
            private int count;

            public Pair(string w, int c) { word = w; count = c; }

            public int Count { get { return count; } }
            public string Word { get { return word; } }

            public override string ToString() { return word + ":" + count.ToString(); }
        }

    The MapReduce method gets the results; the input data could be partitioned and distributed across the cluster. It:

    1. Creates a DryadTable<LineRecord> object, inputTable, to represent the lines of input text. For partitioned data, use GetPartitionedTable<T> instead of GetTable<T> and pass the method a metadata file.
    2. Applies the SelectMany operator to inputTable to transform the collection of lines into a collection of words. The String.Split method converts each line into a collection of words. SelectMany concatenates the collections created by Split into a single IQueryable<string> collection named words, which represents all the words in the file.
    3. Performs the Map part of the operation by applying GroupBy to the words object. The GroupBy operation groups elements with the same key, which is defined by the selector delegate. This creates a higher-order collection whose elements are groups. In this case, the delegate is an identity function, so the key is the word itself and the operation creates a groups collection that consists of groups of identical words.
    4. Performs the Reduce part of the operation by applying Select to groups. This operation reduces the groups of words from Step 3 to an IQueryable<Pair> collection named counts that represents the unique words in the file and how many instances there are of each word. Each key value in groups represents a unique word, so Select creates one Pair object for each unique word. IGrouping.Count returns the number of items in the group, so each Pair object's Count member is set to the number of instances of the word.
    5. Applies OrderByDescending to counts. This operation sorts the input collection in descending order of frequency and creates an ordered collection named ordered.
    6. Applies Take to ordered to create an IQueryable<Pair> collection named top, which contains the k most common words in the input file and their frequency. The test then uses the Pair object's ToString implementation to print the top one hundred words and their frequency.

        public static IQueryable<Pair> MapReduce(string directory, string fileName, int k)
        {
            DryadDataContext ddc = new DryadDataContext("file://" + directory);
            DryadTable<LineRecord> inputTable = ddc.GetTable<LineRecord>(fileName);
            IQueryable<string> words = inputTable.SelectMany(x => x.line.Split(' '));
            IQueryable<IGrouping<string, string>> groups = words.GroupBy(x => x);
            IQueryable<Pair> counts = groups.Select(x => new Pair(x.Key, x.Count()));
            IQueryable<Pair> ordered = counts.OrderByDescending(x => x.Count);
            IQueryable<Pair> top = ordered.Take(k);
            return top;
        }

    To test:

        IQueryable<Pair> results = MapReduce(@"c:\DryadData\input", "TestFile.txt", 100);
        foreach (Pair words in results)
            Debug.Print(words.ToString());

    Note: DryadLINQ applications can use a more compact way to represent the query:

        return inputTable
            .SelectMany(x => x.line.Split(' '))
            .GroupBy(x => x)
            .Select(x => new Pair(x.Key, x.Count()))
            .OrderByDescending(x => x.Count)
            .Take(k);

    MapReduce using PLINQ

    The pattern is relevant even for a single multi-core machine, however. We can write our own PLINQ MapReduce in a few lines: the Map function takes a single input value and returns a set of mapped values → LINQ's SelectMany operator. These are then grouped according to an intermediate key → LINQ's GroupBy operator. The Reduce function takes each intermediate key and a set of values for that key, and produces any number of outputs per key → LINQ's SelectMany again. We can put all of this together to implement a MapReduce in PLINQ that returns a ParallelQuery<T>:

        public static ParallelQuery<TResult> MapReduce<TSource, TMapped, TKey, TResult>(
            this ParallelQuery<TSource> source,
            Func<TSource, IEnumerable<TMapped>> map,
            Func<TMapped, TKey> keySelector,
            Func<IGrouping<TKey, TMapped>, IEnumerable<TResult>> reduce)
        {
            return source
                .SelectMany(map)
                .GroupBy(keySelector)
                .SelectMany(reduce);
        }

    The map function takes in an input document and outputs all of the words in that document. The grouping phase groups all of the identical words together, so that the reduce phase can then count the words in each group and output a word/count pair for each grouping:

        var files = Directory.EnumerateFiles(dirPath, "*.txt").AsParallel();
        var counts = files.MapReduce(
            path => File.ReadLines(path).SelectMany(line => line.Split(delimiters)),
            word => word,
            group => new[] { new KeyValuePair<string, int>(group.Key, group.Count()) });

    Read the article

  • PHP or Javascript or other - Draw simple shapes onto images?

    - by Tommo
    I basically have an image of a world map and I would like to place a pin image at a specified pixel coordinate on top of this world map image. It's for a website, so ideally the solution should be in PHP or JavaScript (I'm avoiding Java and Flash as I want it to be as simple as possible). I had a look at the processing.js library but it is way too big and bloated for just performing this simple task. Is there a pre-existing JavaScript function which will allow me to do this? Or a simpler JavaScript library that I can use? (processing.js was a bit too advanced for me, I couldn't get it working lol) In terms of a PHP solution, I would prefer taking the load off the server and onto the client for this task, but I would still like to hear methods for doing it in PHP if they are suitable. Thanks!

    Read the article

  • Processing XML form input in ASP

    - by Omar Kooheji
    I'm maintaining a legacy application which consists of some ASP.NET pages with C# code-behinds and some classic ASP pages. I need to change the way the application accepts its input, from reading a set of parameters from some form fields to reading one form field which contains some XML and parsing it to get the parameters out. I've written a C# class that takes the NameValueCollection from the HttpRequest's Form property, like so:

        NameValueCollection form = Request.Form;
        Dictionary<string, string> fieldDictionary = RequestDataExtractor.BuildFieldDictionary(form);

    The code in the class looks for a particular parameter; if it's there, it processes the XML and outputs a Dictionary, and if it's not there it just cycles through the form parameters and puts them all into the dictionary (allowing the old method to still work). How would I do this in ASP? Can I use my same class, or a modified version of it, or do I have to write some new code to get this working? If I have to write ASP code, what's the best way to process the XML in ASP? Sorry if this seems like a stupid question, but I know next to nothing about ASP and VB.

    Read the article

  • Pipeline For Downloading and Processing Files In Unix/Linux Environment With Perl

    - by neversaint
    I have a list of file URLs that I want to download:

        http://somedomain.com/foo1.gz
        http://somedomain.com/foo2.gz
        http://somedomain.com/foo3.gz

    What I want to do for each file is: download foo1, foo2, ... in parallel with wget and nohup, and every time a download completes, process the file with myscript.sh. What I have is this:

        #! /usr/bin/perl
        @files = glob("foo*.gz");
        foreach $file (@files) {
            my $downurls = "http://somedomain.com/".$file;
            system("nohup wget $file &");
            system("./myscript.sh $file >> output.txt");
        }

    The problem is that I can't tell in the above pipeline when a file has finished downloading, so myscript.sh doesn't get executed properly. What's the right way to achieve this?

    Read the article

  • Replace HTML tags within XML content with WordML formatting tags

    - by Josh
    I am taking an XML document and creating a Word document using XSLT and OpenXML. The problem is that when I create the Word document, all of the HTML within the CDATA tags is not escaped and looks like this:

        GET /recipe/recipe/cat.php/&gt;&quot;&gt;&lt;script&gt;alert(document.domain)&lt;/script&gt;

    I have tried defining "cdata-section-elements" in my xsl:output; however, I receive an error stating that the p tag doesn't match the w:t tag (the p tag is part of the CDATA HTML). Here is what one of my XSL templates looks like:

        <xsl:template match="SECTION">
          <w:p w:rsidR="00272D24" w:rsidRPr="00272D24" w:rsidRDefault="00272D24">
            <w:pPr>
              <w:rPr>
                <w:rFonts w:ascii="Arial" w:hAnsi="Arial" w:cs="Arial"/>
              </w:rPr>
            </w:pPr>
            <w:r w:rsidRPr="00272D24">
              <w:rPr>
                <w:rFonts w:ascii="Arial" w:hAnsi="Arial" w:cs="Arial"/>
              </w:rPr>
              <w:t>
                <xsl:value-of select="INFORMATION"/>
              </w:t>
            </w:r>
          </w:p>
        </xsl:template>

    Here is what the XML looks like:

        <INFORMATION>
        <![CDATA[ <P> line 1 of information <P> line 2 of information.......]]>
        </INFORMATION>

    Here is what the Word output looks like (white space and poor formatting):

        DIAGNOSIS: <P> line 1 of information. <P> line 2 of information

    I need to be able to somehow render the HTML or strip out the HTML. If I strip out the HTML then I would have to search for every possible HTML element, which is madness! Any help at all would be appreciated... Thanks.

    Read the article

  • Out of the box approach to upload XML file to the BIRT Server for Processing

    - by Paul
    Hello, I have the BIRT Report Server configured in Tomcat and it works fine when running reports that require an XML data source, but that XML file has to be available on the network in order for the server to find it and run. Is there an out-of-the-box configuration in the BIRT server that will prompt the user to upload the XML file directly to the server when they try to run a given report that requires an XML data source? This would be handy for users that have the XML data source stored locally on their C: drive, so they would not have to move it to a network server in order for it to be read by BIRT. Thanks in advance. Paul

    Read the article

  • Processing image data in Java ME

    - by Trimack
    Hi, I want to process raw data from a picture taken in Java ME with

        byte[] snap = videoControl.getSnapshot(encoding);

    My question is whether I should try working with snap directly, or should I first create an image from this array,

        Image im = Image.createImage(snap, 0, snap.length);

    and then work with that? Or is there some better documentation of both methods (i.e. getSnapshot and createImage) than the Java API reference?

    Read the article

  • WPF calls not working during long method processing

    - by Colin Rouse
    Hi, the following method does not apply the WPF changes (background = red) until the second method (DoWork) exits:

        private void change()
        {
            Background = Brushes.Red;
            Dispatcher.BeginInvoke((Action) DoWork);
        }

    DoWork() takes several seconds to run and I don't really want to put it into a thread, as this code will be used in several places and will probably interact with the Dispatcher thread at various intervals. I've tried calling the Invalidate...() methods, but to no avail. The BeginInvoke() was added to see if the delay would allow the background change to be applied before the logic was called. Typically, the logic would be part of this method. Btw, most of the logic is performed on a different thread and shouldn't block the Dispatcher thread?! Can someone please help? Thanks
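
    One commonly suggested variation (not from the post, and only a sketch; it assumes the work really must stay on the dispatcher thread): schedule DoWork at DispatcherPriority.Background, which is lower than the render priority, so WPF gets a chance to repaint the red background before the long-running work ties up the thread.

        using System;
        using System.Windows;
        using System.Windows.Media;
        using System.Windows.Threading;

        public partial class MainWindow : Window
        {
            private void Change()
            {
                Background = Brushes.Red;

                // Render runs at a higher priority than Background, so the brush
                // change is painted before DoWork starts executing.
                Dispatcher.BeginInvoke(new Action(DoWork), DispatcherPriority.Background);
            }

            private void DoWork()
            {
                // placeholder for the long-running logic described in the post
            }
        }

    If the logic can be moved off the UI thread after all, a BackgroundWorker or Task with a dispatcher callback avoids freezing the UI for the whole duration.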

    Read the article

  • Help merging Perl code routines together for file processing

    - by jdamae
    I need some Perl help putting these two processes/pieces of code together. I was able to get them working individually for testing, but I need help bringing them together, especially with the loop constructs. I'm not sure if I should go with foreach... anyway, the code is below. Also, any best practices would be great too, as I'm learning this language. Thanks for your help.

    Here's the process flow I am looking for:

    - read a directory
    - look for a particular file
    - use the file name to strip out some key information to create a newly processed file
    - process the input file
    - create the newly processed file for each input file read (if I read in 10, I create 10 new files)

    Sample recs:

        col1,col2,col3,col4,col5
        [email protected],[email protected],8,2009-09-24 21:00:46,1
        [email protected],[email protected],16,2007-08-18 22:53:12,33
        [email protected],[email protected],16,2007-08-18 23:41:23,33

    Here's my test code. Target file type: /backups/test/foo101.name.aue-foo_p002.20110124.csv

    Part 1:

        my $target_dir = "/backups/test/";
        opendir my $dh, $target_dir or die "can't opendir $target_dir: $!";
        while (defined(my $file = readdir($dh))) {
            next if ($file =~ /^\.+$/);
            # Get filename attributes
            if ($file =~ /^foo(\d{3})\.name\.(\w{3})-foo_p(\d{1,4})\.\d+.csv$/) {
                print "$1\n";
                print "$2\n";
                print "$3\n";
            }
            print "$file\n";
        }

    Part 2:

        use strict;
        use Digest::MD5 qw(md5_hex);
        # Create new file
        open (NEWFILE, ">/backups/processed/foo$1.name.$2-foo_p$3.out") || die "cannot create file";
        my $data = '';
        my $line1 = <>;
        chomp $line1;
        my @heading = split /,/, $line1;
        my ($sep1, $sep2, $eorec) = ( "^A", "^E", "^D");
        while (<>) {
            my $digest = md5_hex($data);
            chomp;
            my (@values) = split /,/;
            my $extra = "__mykey__$sep1$digest$sep2" ;
            $extra .= "$heading[$_]$sep1$values[$_]$sep2" for (0..scalar(@values));
            $data .= "$extra$eorec";
            print NEWFILE "$data";
        }
        # print $data;
        close (NEWFILE);

    Read the article

  • Complex Event Processing with C#

    - by Soham
    Hi all, can you suggest a possible way to get started with CEP in C#? By "get started" I mean:

    - A good book talking about CEP and C#
    - A library which deals with event clouds
    - Some sample code using the library
    - Some good-quality code in general, to get a feel for the problems
    - Good blogs

    Anything else you might feel is necessary to add for someone getting started in CEP and C# will be helpful. Thanks, Soham
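
    Not from the original post, but as a concrete taste of event-stream queries in C#: the Reactive Extensions (Rx, the System.Reactive package) are not a full CEP engine, yet they make it easy to experiment with windowed aggregations over event streams. A minimal sketch (the simulated event source and one-second window are made up for illustration):

        using System;
        using System.Reactive.Linq;

        class RollingCountDemo
        {
            static void Main()
            {
                // Simulated event stream: one event every 100 ms.
                IObservable<long> events = Observable.Interval(TimeSpan.FromMilliseconds(100));

                // Complex-event-style query: count the events falling into each 1-second window.
                IDisposable subscription = events
                    .Buffer(TimeSpan.FromSeconds(1))
                    .Subscribe(window => Console.WriteLine($"{window.Count} events in the last second"));

                Console.ReadLine();
                subscription.Dispose();
            }
        }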

    Read the article

  • What is the idea behind scaling an image using Lanczos?

    - by banister
    Hi, I'm interested in image scaling algorithms and have implemented the bilinear and bicubic methods. However, I have heard of the Lanczos and other more sophisticated methods for even higher-quality image scaling, and I am very curious how they work. Could someone here explain the basic idea behind scaling an image using Lanczos (both upscaling and downscaling) and why it results in higher quality? I do have a background in Fourier analysis and have done some signal processing stuff in the past, but not with relation to image processing, so don't be afraid to use terms like "frequency response" and such in your answer :) EDIT: I guess what I really want to know is the concept and theory behind using a convolution filter for interpolation. (Note: I have already read the Wikipedia article on Lanczos resampling but it didn't have nearly enough detail for me.) Thanks a lot!
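
    For reference (not part of the original question): Lanczos resampling is a convolution with the Lanczos kernel, a windowed sinc. Each output pixel is a weighted sum of nearby input pixels, with weights given by the kernel evaluated at their distances from the output position. A small C# sketch of the kernel itself, assuming the usual definition with filter size a:

        using System;

        static class Lanczos
        {
            // Lanczos kernel: L(x) = sinc(x) * sinc(x / a) for |x| < a, and 0 otherwise,
            // where sinc(x) = sin(pi x) / (pi x) and a is the filter size (2 or 3 are common).
            public static double Kernel(double x, int a)
            {
                if (x == 0.0) return 1.0;
                if (Math.Abs(x) >= a) return 0.0;

                double px = Math.PI * x;
                return a * Math.Sin(px) * Math.Sin(px / a) / (px * px);
            }
        }

    A one-dimensional resample at position t is then roughly the sum over neighbouring samples s[i] of s[i] * Kernel(t - i, a), normalised by the sum of the weights; 2-D scaling applies this separably along each axis.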

    Read the article

  • Processing command-line arguments in prefix notation in Python

    - by ejm
    I'm trying to parse a command line in Python which looks like the following:

        $ ./command -o option1 arg1 -o option2 arg2 arg3

    In other words, the command takes an unlimited number of arguments, and each argument may optionally be preceded by an -o option, which relates specifically to that argument. I think this is called a "prefix notation". In the Bourne shell I would do something like the following:

        while test -n "$1"
        do
            if test "$1" = '-o'
            then
                option="$2"
                shift 2
            fi
            # Work with $1 (the argument) and $option (the option)
            # ...
            shift
        done

    Looking around at the Bash tutorials, etc., this seems to be the accepted idiom, so I'm guessing Bash is optimized to work with command-line arguments this way. Trying to implement this pattern in Python, my first guess was to use pop(), as this is basically a stack operation. But I'm guessing this won't work as well in Python because the list of arguments in sys.argv is in the wrong order and would have to be processed like a queue (i.e. pop from the left). I've read that lists are not optimized for use as queues in Python. So, my ideas are: convert argv to a collections.deque and use popleft(), reverse argv using reverse() and use pop(), or maybe just work with the int list indices themselves. Does anyone know of a better way to do this, or otherwise which of my ideas would be best practice in Python?

    Read the article

  • How to intercept processing when Session.IsNewSession is true

    - by Cen
    I have a small 4-page application for my customers to work through. They fill out information. If they let it sit too long and the session times out, I want to pop up a JavaScript alert that their session has expired and that they need to start over. At that point, they are redirected to the first page of the application. I'm getting some strange behavior. I'm stepping through code, forcing Session.IsNewSession to be true. At this point, I write out a call to JavaScript in a Literal control placed at the bottom of the page. The JavaScript is called, and the redirection occurs. However, what is happening is: I press a button which is more or less a "Next Page" button, triggering this code. The next page is displayed, and then the alert and redirection occur. The result I was expecting was to stay on the same page where I detected the timeout, with the alert popping up over it, and then the redirection. I'm checking for Session.IsNewSession in a base class for these pages, overriding the OnInit event. Any ideas why I am getting this behavior? Thanks!
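
    For comparison, a sketch of the usual detection idiom (an assumption on my part; the post doesn't show its base-class code): a brand-new session combined with an ASP.NET session cookie already on the request means the previous session expired, and redirecting in OnInit, before the requested page renders, avoids the "next page shows first" behaviour. Start.aspx and the expired flag are placeholder names; ASP.NET_SessionId is the default cookie name and would differ if a custom name is configured in web.config.

        using System;
        using System.Web.UI;

        public class TimeoutAwareBasePage : Page
        {
            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);

                if (Context.Session != null && Session.IsNewSession)
                {
                    string cookieHeader = Request.Headers["Cookie"];

                    // New session + old session cookie => the previous session timed out.
                    bool hadSessionCookie = cookieHeader != null &&
                        cookieHeader.IndexOf("ASP.NET_SessionId", StringComparison.OrdinalIgnoreCase) >= 0;

                    if (hadSessionCookie)
                    {
                        // Redirect before the requested page renders; the start page can
                        // read the flag and show the "session expired" alert itself.
                        Response.Redirect("~/Start.aspx?expired=1", true);
                    }
                }
            }
        }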

    Read the article
