Search Results

Search found 11458 results on 459 pages for 'perl thread'.


  • C# Process Binary File, Multi-Thread Processing

    - by washtik
    I have the following code that processes a binary file. I want to split the processing workload by assigning each line of the binary file to a thread in the ThreadPool. Processing time for each line is small, but when dealing with files that might contain hundreds of lines it makes sense to split the workload. My question is regarding the BinaryReader and thread safety. First of all, is what I am doing below acceptable? I have a feeling it would be better to pass only the binary data for each line to the PROCESS_Binary_Return_lineData method. Please note the code below is conceptual; I'm looking for a bit of guidance on this, as my knowledge of multi-threading is in its infancy. Perhaps there is a better way to achieve the same result, i.e. split the processing of each binary line (a sketch of one such approach follows below).

        var dic = new Dictionary<DateTime, Data>();
        var resetEvent = new ManualResetEvent(false);
        using (var b = new BinaryReader(File.Open(Constants.dataFile, FileMode.Open, FileAccess.Read, FileShare.Read)))
        {
            var lByte = b.BaseStream.Length;
            var toProcess = 0;
            while (lByte >= DATALENGTH)
            {
                b.BaseStream.Position = lByte;
                lByte = lByte - AB_DATALENGTH;
                ThreadPool.QueueUserWorkItem(delegate
                {
                    Interlocked.Increment(ref toProcess);
                    var lineData = PROCESS_Binary_Return_lineData(b);
                    lock (dic)
                    {
                        if (!dic.ContainsKey(lineData.DateTime))
                        {
                            dic.Add(lineData.DateTime, lineData);
                        }
                    }
                    if (Interlocked.Decrement(ref toProcess) == 0)
                        resetEvent.Set();
                }, null);
            }
        }
        resetEvent.WaitOne();
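    One possible shape for that, offered as a minimal sketch rather than a tested solution: read each fixed-length record on the reading thread and hand only its bytes to the worker, so the BinaryReader itself is never shared between threads. DATALENGTH, Data and ParseLine stand in for the record length, record type and per-line parser from the question, and CountdownEvent requires .NET 4.

        // Conceptual sketch only: the reader stays on one thread; each worker gets a private byte[].
        // Assumes: using System; using System.Collections.Generic; using System.IO; using System.Threading;
        var dic = new Dictionary<DateTime, Data>();
        using (var countdown = new CountdownEvent(1))
        using (var b = new BinaryReader(File.Open(Constants.dataFile, FileMode.Open, FileAccess.Read, FileShare.Read)))
        {
            while (b.BaseStream.Position + DATALENGTH <= b.BaseStream.Length)
            {
                byte[] record = b.ReadBytes(DATALENGTH);   // copy owned by the worker
                countdown.AddCount();
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    try
                    {
                        Data lineData = ParseLine(record);  // hypothetical per-line parser
                        lock (dic)
                        {
                            if (!dic.ContainsKey(lineData.DateTime))
                                dic.Add(lineData.DateTime, lineData);
                        }
                    }
                    finally { countdown.Signal(); }
                });
            }
            countdown.Signal();   // balance the initial count of 1
            countdown.Wait();     // block until every queued record has been processed
        }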

    Read the article

  • Thread-safe data structure design

    - by Inso Reiges
    Hello, I have to design a data structure that is to be used in a multi-threaded environment. The basic API is simple: insert element, remove element, retrieve element, check that an element exists. The structure's implementation uses implicit locking to guarantee the atomicity of a single API call. After I implemented this it became apparent that what I really need is atomicity across several API calls. For example, if a caller needs to check for the existence of an element before trying to insert it, he can't do that atomically even if each single API call is atomic:

        if (!data_structure.exists(element)) {
            data_structure.insert(element);
        }

    The example is somewhat awkward, but the basic point is that we can't trust the result of the "exists" call anymore after we return from the atomic context (the generated assembly clearly shows a minor chance of a context switch between the two calls). What I currently have in mind to solve this is exposing the lock through the data structure's public API. This way clients will have to explicitly lock things, but at least they won't have to create their own locks. Is there a better commonly-known solution to these kinds of problems? And as long as we're at it, can you advise some good literature on thread-safe design? EDIT: I have a better example. Suppose that element retrieval returns either a reference or a pointer to the stored element and not its copy. How can a caller be protected to safely use this pointer/reference after the call returns? If you think that not returning copies is a problem, then think about deep copies, i.e. objects that should also copy other objects they point to internally. Thank you.
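    One commonly suggested alternative to exposing the lock, sketched here purely as an illustration (the class and method names are invented): keep the lock private and offer compound operations that run entirely under it, and either hand out copies or run a caller-supplied function while the lock is held instead of returning raw references into the container.

        // Minimal C++11 sketch, assuming a set-like structure.
        #include <mutex>
        #include <set>

        template <typename T>
        class ConcurrentSet {
        public:
            // check-then-insert as one atomic step; true if the element was inserted
            bool insert_if_absent(const T& value) {
                std::lock_guard<std::mutex> guard(mutex_);
                return data_.insert(value).second;
            }
            // run the caller's code on the stored element while the lock is held,
            // instead of returning a reference that outlives the critical section
            template <typename Fn>
            void with_element(const T& value, Fn fn) {
                std::lock_guard<std::mutex> guard(mutex_);
                auto it = data_.find(value);
                if (it != data_.end()) fn(*it);
            }
        private:
            std::set<T> data_;
            std::mutex mutex_;
        };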

    Read the article

  • Two column layout, navigation div on the right, solution from previous thread didn't seem to work

    - by Tom
    I tried the solution from this thread, but I must be missing something because it doesn't work:

        <div style="float:left;margin-right:200px">
            <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
        </div>
        <div style="float:right;width:200px">
            <p>navigation</p>
        </div>

    It works when the text in the content div (the left one) is short, but when it's long the div takes up the whole width of the browser; the margin is there, but the right div is pushed below the first one nevertheless. What am I missing? Edit: The goal is to have a fixed-size navigation column on the right of the browser window, and the left div should get all the space left over by the right navigation column (liquid layout).
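    One common way to get the fixed-right, liquid-left layout described here, shown only as a sketch of the general pattern rather than a drop-in fix: put the floated fixed-width column first in the source order and leave the content column unfloated, giving it a right margin at least as wide as the navigation column.

        <div style="float:right;width:200px">
            <p>navigation</p>
        </div>
        <div style="margin-right:210px">
            <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, ...</p>
        </div>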

    Read the article

  • Reload images in a UIWebView after they have been downloaded by a background thread

    - by dantastic
    I have an application that frequently checks in with a server and downloads a batch of articles to the iPhone. The articles are in HTML and are just stored using Core Data. An article has 0-n images on the page. Downloading all associated images at the same time as the text would be too slow and take too much bandwidth. Users are not likely to open every article, but if they open an article once it is likely they will open it several times. So I want to download and store the images locally when they are needed. These articles are listed in a UITableView. When you tap an article you pop open a UIWebView that displays the article. I have a function that checks whether I have already downloaded the images associated with the article. If I have, I just pop open the UIWebView and everything works fine. If I don't have the images downloaded, I go off and download them and store them in my Documents directory. Although this is working, the app hangs while the images are downloading. Not very tidy. I want the article to open in a snap and download the images with the article open. So what I've done is check whether the images are downloaded; if they aren't, I go ahead and just "touch" the files I need and load the web view. The UIWebView opens up, but the referenced images contain no data. Then, in a background thread, I download the images and overwrite the "dummy" ones. This saves the images and everything, but it won't reload the images in my current UIWebView. I have to go back out of the article and back in again to see the images. Are there any ways around this, i.e. reloading just an image in a UIWebView?
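    One approach sometimes used for exactly this situation, offered as an untested sketch: once the background download has overwritten the placeholder files, ask the already-open UIWebView to re-request its images by cache-busting their src attributes with a little injected JavaScript (self.webView is assumed to be the web view showing the article).

        // Sketch only; run on the main thread once the downloads have finished.
        NSString *js =
            @"var imgs = document.getElementsByTagName('img');"
            @"for (var i = 0; i < imgs.length; i++) {"
            @"  imgs[i].src = imgs[i].src.split('?')[0] + '?reload=' + new Date().getTime();"
            @"}";
        [self.webView performSelectorOnMainThread:@selector(stringByEvaluatingJavaScriptFromString:)
                                       withObject:js
                                    waitUntilDone:NO];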

    Read the article

  • How do I get events to execute on the "main thread"

    - by William DiStefano
    I have 2 classes: one is frmMain, a Windows Form, and the other is a class in VB.NET 2003. frmMain contains a Start button which executes the monitor function in the other class. My question is this: I am manually adding the event handlers, so when the events are executed, how do I get them to execute on the "main thread"? Because when the tooltip balloon pops up at the tray icon it displays a second tray icon instead of popping up at the existing tray icon. I can confirm that this is because the events are firing on new threads, since if I try to display a balloon tooltip from frmMain it will display on the existing tray icon. Here is a part of the second class (not the entire thing):

        Friend Class monitorFolders
            Private _watchFolder As System.IO.FileSystemWatcher = New System.IO.FileSystemWatcher
            Private _eType As evtType

            Private Enum evtType
                changed
                created
                deleted
                renamed
            End Enum

            Friend Sub monitor(ByVal path As String)
                _watchFolder.Path = path
                'Add a list of Filter to specify
                _watchFolder.NotifyFilter = IO.NotifyFilters.DirectoryName
                _watchFolder.NotifyFilter = _watchFolder.NotifyFilter Or IO.NotifyFilters.FileName
                _watchFolder.NotifyFilter = _watchFolder.NotifyFilter Or IO.NotifyFilters.Attributes
                'Add event handlers for each type of event that can occur
                AddHandler _watchFolder.Changed, AddressOf change
                AddHandler _watchFolder.Created, AddressOf change
                AddHandler _watchFolder.Deleted, AddressOf change
                AddHandler _watchFolder.Renamed, AddressOf Rename
                'Start watching for events
                _watchFolder.EnableRaisingEvents = True
            End Sub

            Private Sub change(ByVal source As Object, ByVal e As System.IO.FileSystemEventArgs)
                If e.ChangeType = IO.WatcherChangeTypes.Changed Then
                    _eType = evtType.changed
                    notification()
                End If
                If e.ChangeType = IO.WatcherChangeTypes.Created Then
                    _eType = evtType.created
                    notification()
                End If
                If e.ChangeType = IO.WatcherChangeTypes.Deleted Then
                    _eType = evtType.deleted
                    notification()
                End If
            End Sub

            Private Sub notification()
                _mainForm.NotifyIcon1.ShowBalloonTip(500, "Folder Monitor", "A file has been " + [Enum].GetName(GetType(evtType), _eType), ToolTipIcon.Info)
            End Sub
        End Class
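    One way this is often handled, sketched under the assumption that the monitorFolders class can be given a reference to the form: FileSystemWatcher has a SynchronizingObject property, and when it is set to a control that was created on the UI thread, the Changed/Created/Deleted/Renamed events are raised on that thread, so the balloon tip comes from the existing tray icon.

        ' Sketch only: pass the form in and let the watcher marshal its own events.
        Friend Sub monitor(ByVal path As String, ByVal owner As frmMain)
            _mainForm = owner
            _watchFolder.SynchronizingObject = owner   ' events now fire on the UI thread
            _watchFolder.Path = path
            _watchFolder.NotifyFilter = IO.NotifyFilters.DirectoryName Or IO.NotifyFilters.FileName Or IO.NotifyFilters.Attributes
            AddHandler _watchFolder.Changed, AddressOf change
            AddHandler _watchFolder.Created, AddressOf change
            AddHandler _watchFolder.Deleted, AddressOf change
            AddHandler _watchFolder.Renamed, AddressOf Rename
            _watchFolder.EnableRaisingEvents = True
        End Sub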

    Read the article

  • Calling into a saved java object via JNI from a different thread

    - by Drake Amara
    I have a Java object which calls into a C++ shared object via JNI. In C++, I am saving a reference to the JNIEnv and jobject:

        JavaVM * jvm;
        JNIEnv * myEnv;
        jobject myobj;

        JNIEXPORT void JNICALL Java_org_api_init (JNIEnv *env, jobject jObj)
        {
            myEnv = env;
            myobj = jObj;
        }

    I also have a GLSurface renderer and it eventually calls the C++ shared object mentioned above on a different thread, the GLThread. I am then trying to call back into my original Java object using the jobject I saved initially, but I think because I am on the GLThread, I get the following error:

        W/dalvikvm(16101): JNI WARNING: 0x41ded218 is not a valid JNI reference
        I/dalvikvm(16101): "GLThread 981" prio=5 tid=15 RUNNABLE
        I/dalvikvm(16101): | group="main" sCount=0 dsCount=0 obj=0x41d6e220 self=0x5cb11078
        I/dalvikvm(16101): | sysTid=16133 nice=0 sched=0/0 cgrp=apps handle=1555429136
        I/dalvikvm(16101): | schedstat=( 0 0 0 ) utm=42 stm=32 core=1

    The code calling back into Java:

        void setData()
        {
            jvm->AttachCurrentThread(&myEnv, 0);
            jclass javaClass = myEnv->FindClass("com/myapp/myClass");
            if (javaClass == NULL) {
                LOGD("ERROR - cant find class");
            }
            jmethodID method = myEnv->GetMethodID(javaClass, "updateDataModel", "()V");
            if (method == NULL) {
                LOGD("ERROR - cant access method");
            }
            // this works, but its a new java object
            //jobject myobj2 = myEnv->NewObject(javaClass, method);
            //this is where the crash occurs
            myEnv->CallVoidMethod(myobj, method, NULL);
        }

    If instead I create a new jobject using env->NewObject, I can successfully call back into Java, but it is a new object and I don't want that. I need to get back to my original Java object. Is it a matter of switching threads before I call back into Java? If so, how do I do so?
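    A common explanation for this particular warning, sketched below rather than verified against the full project: a jobject parameter is only a local reference, valid just for the duration of the native call that received it and only on that thread, so stashing it in a global and using it later from the GLThread produces exactly this "not a valid JNI reference" message. The usual remedy is to promote it to a global reference with NewGlobalRef and to fetch a fresh JNIEnv per thread instead of caching one.

        // Sketch: keep a JavaVM* and a global reference; never cache JNIEnv* across threads.
        JavaVM *jvm;
        jobject myGlobalObj;   // global ref, usable later from the GL thread

        JNIEXPORT void JNICALL Java_org_api_init(JNIEnv *env, jobject jObj)
        {
            env->GetJavaVM(&jvm);
            myGlobalObj = env->NewGlobalRef(jObj);   // release with DeleteGlobalRef when done
        }

        void setData()
        {
            JNIEnv *env;
            jvm->AttachCurrentThread(&env, 0);
            jclass cls = env->GetObjectClass(myGlobalObj);
            jmethodID method = env->GetMethodID(cls, "updateDataModel", "()V");
            if (method != NULL)
                env->CallVoidMethod(myGlobalObj, method);
        }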

    Read the article

  • Writing a synchronized thread-safety wrapper for NavigableMap

    - by polygenelubricants
    java.util.Collections currently provides the following utility methods for creating synchronized wrappers for the various collection interfaces:

        synchronizedCollection(Collection<T> c)
        synchronizedList(List<T> list)
        synchronizedMap(Map<K,V> m)
        synchronizedSet(Set<T> s)
        synchronizedSortedMap(SortedMap<K,V> m)
        synchronizedSortedSet(SortedSet<T> s)

    Analogously, it also has 6 unmodifiableXXX overloads. The glaring omission here is the set of utility methods for NavigableMap<K,V>. It's true that it extends SortedMap, but so does SortedSet extend Set, and Set extends Collection, and Collections has dedicated utility methods for SortedSet and Set. Presumably NavigableMap is a useful abstraction, or else it wouldn't have been there in the first place, and yet there are no utility methods for it. So the questions are: Is there a specific reason why Collections doesn't provide utility methods for NavigableMap? How would you write your own synchronized wrapper for NavigableMap? Glancing at the source code for the OpenJDK version of Collections.java suggests that this is just a "mechanical" process. Is it true that, in general, you can add synchronized thread-safety like this? If it's such a mechanical process, can it be automated (Eclipse plug-in, etc.)? Is this code repetition necessary, or could it have been avoided by a different OOP design pattern?
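    For the second question, the "mechanical" wrapper would look roughly like the sketch below (only a few methods are shown; a complete wrapper has to cover the whole interface and also wrap the returned sub-map and navigable-set views the same way). As a side note, JDK 8 later added Collections.synchronizedNavigableMap and unmodifiableNavigableMap, which closes this particular gap.

        // Partial sketch of a hand-rolled synchronized NavigableMap wrapper.
        import java.util.Map;
        import java.util.NavigableMap;

        final class SynchronizedNavigableMap<K, V> {
            private final NavigableMap<K, V> m;      // backing map
            private final Object mutex = new Object();

            SynchronizedNavigableMap(NavigableMap<K, V> m) { this.m = m; }

            V put(K key, V value)              { synchronized (mutex) { return m.put(key, value); } }
            V get(Object key)                  { synchronized (mutex) { return m.get(key); } }
            Map.Entry<K, V> ceilingEntry(K k)  { synchronized (mutex) { return m.ceilingEntry(k); } }
            Map.Entry<K, V> floorEntry(K k)    { synchronized (mutex) { return m.floorEntry(k); } }
            K higherKey(K k)                   { synchronized (mutex) { return m.higherKey(k); } }
            // ...every other Map/SortedMap/NavigableMap method delegates the same way,
            // and descendingMap()/subMap()/navigableKeySet() must return wrapped views.
        }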

    Read the article

  • EventAggregator, is it thread-safe?

    - by pfaz
    Is this thread-safe? The EventAggregator in Prism is a very simple class with only one method. I was surprised when I noticed that there was no lock around the null check and the creation of a new type to add to the private _events collection. If two threads called GetEvent simultaneously for the same type (before it exists in _events), it looks like this would result in two entries in the collection.

        /// <summary>
        /// Gets the single instance of the event managed by this EventAggregator. Multiple calls to this method with the same <typeparamref name="TEventType"/> returns the same event instance.
        /// </summary>
        /// <typeparam name="TEventType">The type of event to get. This must inherit from <see cref="EventBase"/>.</typeparam>
        /// <returns>A singleton instance of an event object of type <typeparamref name="TEventType"/>.</returns>
        public TEventType GetEvent<TEventType>() where TEventType : EventBase
        {
            TEventType eventInstance = _events.FirstOrDefault(evt => evt.GetType() == typeof(TEventType)) as TEventType;

            if (eventInstance == null)
            {
                eventInstance = Activator.CreateInstance<TEventType>();
                _events.Add(eventInstance);
            }

            return eventInstance;
        }
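    For comparison, a locked variant of the same lookup could look like the sketch below. This is purely illustrative and is not Prism's code; it just shows the shape of guarding the check-then-add so two threads cannot both insert the same event type.

        // Sketch only (assumes using System; using System.Linq;).
        private readonly object _syncRoot = new object();

        public TEventType GetEvent<TEventType>() where TEventType : EventBase
        {
            lock (_syncRoot)
            {
                TEventType eventInstance =
                    _events.FirstOrDefault(evt => evt.GetType() == typeof(TEventType)) as TEventType;
                if (eventInstance == null)
                {
                    eventInstance = Activator.CreateInstance<TEventType>();
                    _events.Add(eventInstance);
                }
                return eventInstance;
            }
        }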

    Read the article

  • "Thread was being aborted" on large dataset

    - by Donaldinio
    I am trying to process 114,000 rows in a dataset (populated from an Oracle database). I am hitting an error at around the 600 mark: "Thread was being aborted". All I am doing is reading the dataset, and I still hit the issue. Is this too much data for a dataset? It seems to load into the dataset OK, though. I welcome any better ways to process this amount of data.

        rootTermsTable = entKw.GetRootKeywordsByCategory(catID);
        for (int k = 0; k < rootTermsTable.Rows.Count; k++)
        {
            string keywordID = rootTermsTable.Rows[k]["IK_DBKEY"].ToString();
            ...
        }

        public DataTable GetKeywordsByCategory(string categoryID)
        {
            DbProviderFactory provider = DbProviderFactories.GetFactory(connectionProvider);
            DbConnection con = provider.CreateConnection();
            con.ConnectionString = connectionString;
            DbCommand com = provider.CreateCommand();
            com.Connection = con;
            com.CommandText = string.Format("Select * From icm_keyword WHERE (IK_IC_DBKEY = {0})", categoryID);
            com.CommandType = CommandType.Text;
            DataSet ds = new DataSet();
            DbDataAdapter ad = provider.CreateDataAdapter();
            ad.SelectCommand = com;
            con.Open();
            ad.Fill(ds);
            con.Close();
            DataTable dt = new DataTable();
            dt = ds.Tables[0];
            return dt;
            //return ds.Tables[0].DefaultView;
        }
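    On the "better ways to process this amount of data" part, one lower-overhead alternative, sketched without being tested against the schema in the question: stream the rows with a data reader instead of filling a 114,000-row DataSet up front, and bind the category as a parameter instead of formatting it into the SQL string (the exact parameter prefix varies by provider).

        // Sketch only; reuses connectionProvider/connectionString from the question.
        DbProviderFactory provider = DbProviderFactories.GetFactory(connectionProvider);
        using (DbConnection con = provider.CreateConnection())
        using (DbCommand com = provider.CreateCommand())
        {
            con.ConnectionString = connectionString;
            com.Connection = con;
            com.CommandText = "SELECT IK_DBKEY FROM icm_keyword WHERE IK_IC_DBKEY = :catID";
            DbParameter p = com.CreateParameter();
            p.ParameterName = "catID";
            p.Value = categoryID;
            com.Parameters.Add(p);

            con.Open();
            using (DbDataReader reader = com.ExecuteReader())
            {
                while (reader.Read())
                {
                    string keywordID = reader["IK_DBKEY"].ToString();
                    // process each keyword here instead of materialising them all first
                }
            }
        }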

    Read the article

  • StructureMap Configuration Per Thread/Request for the Full Dependency Chain

    - by Phil Sandler
    I've been using StructureMap for a few months now, and it has worked great on some smaller (greenfield) projects. Most of the configurations I have had to set up involve a single implementation per interface. In the cases where I needed to choose a specific implementation at runtime, I created a factory class that used ObjectFactory.GetNamedInstance<T>(). In the smaller projects there were few enough of these cases that I was comfortable with the references to ObjectFactory. My understanding is that you want to limit these references as much as possible, and ideally reference the ObjectFactory only once.

    I am working to refactor a larger codebase to use IoC/StructureMap, and am finding that I may need many of these factory classes with ObjectFactory references to get what I need. Essentially, I am creating a "root service" with the ObjectFactory, so that everything in the dependency chain is managed by the container. The root service is created by name (i.e. "BuildCar", "BuildTruck"), and the services needed deeper in the dependency chain could also be constructed using the same name, so the "IAttachWheels" service could vary based on whether a car or a truck is being built. Since the class that depends on IAttachWheels is the same in both configurations, I don't think I can use ConstructedBy in the registry to choose the implementation. Also, to be clear, the IAttachWheels implementations need to be managed by the container as well, because the dependency chain runs fairly deep.

    I looked briefly at Profiles as an option, but read (here on Stack Overflow) that changing profiles essentially changes implementations for all threads. Is there a feature similar to profiles that is thread/request specific? Is the factory class that references ObjectFactory the right way to go? Any thoughts would be appreciated.

    Read the article

  • Thread-Safe lazy instantiating using MEF

    - by Xaqron
        // Member variable
        private static readonly object _syncLock = new object();

        // Now inside a static method
        foreach (var lazyObject in plugins)
        {
            if ((string)lazyObject.Metadata["key"] == "something")
            {
                lock (_syncLock)
                {
                    // It seems the `IsValueCreated` is not up-to-date
                    if (!lazyObject.IsValueCreated)
                        lazyObject.Value.DoSomething();
                }
                return lazyObject.Value;
            }
        }

    Here I need synchronized access per loop iteration. There are many threads iterating this loop and, based on the key they are looking for, a lazy instance is created and returned. lazyObject should not be created more than one time. Although the Lazy class is meant to ensure exactly that, and despite the lock used, under high threading I get more than one instance created (I track this with an Interlocked.Increment on a volatile static int and log it somewhere). The problem is that I don't have access to the definition of Lazy, and MEF defines how the Lazy class creates objects. I should note that the CompositionContainer has a thread-safe option in its constructor, which is already used. My questions: 1) Why doesn't the lock work? 2) Should I use an array of locks instead of one lock, for performance improvement?

    Read the article

  • How to multi-thread this?

    - by WilliamKF
    I wish to have two threads. The first, thread1, occasionally calls the following pseudo-function:

        void waitForThread2()
        {
            if (thread2 is not idle) {
                return;
            }
            notifyThread2IamReady();
            while (thread2IsExclusive) {
            }
        }

    The second, thread2, is forever in the following pseudo-loop:

        for (;;) {
            Notify thread1 I am idle.
            while (!thread1IsReady()) {
            }
            Notify thread1 I am exclusive.
            Do some work while thread1 is blocked.
            Notify thread1 I am busy.
            Do some work in parallel with thread1.
        }

    What is the best way to write this such that both thread1 and thread2 are kept as busy as possible on a machine with multiple cores? I would like to avoid long delays between notification in one thread and detection by the other. I tried using pthread condition variables, but found the delay between thread2 doing 'notify thread1 I am busy' and the loop in waitForThread2() on thread2IsExclusive() can be up to almost one second. I then tried using a volatile sig_atomic_t shared variable to control the same, but something is going wrong, so I must not be doing it correctly.
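    For reference, a condition-variable version of the handshake described above might look like the sketch below (illustrative only; the flag names are invented). Delays of the size described often point to a state change made without signalling the condition variable; here every flag update happens under the mutex and is followed by a broadcast, and every wait re-checks its predicate in a loop.

        /* Sketch of the handshake with pthreads. */
        #include <pthread.h>
        #include <stdbool.h>

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
        static bool thread2_idle      = false;
        static bool thread1_ready     = false;
        static bool thread2_exclusive = false;

        void waitForThread2(void)
        {
            pthread_mutex_lock(&lock);
            if (!thread2_idle) {                     /* "if thread2 is not idle, return" */
                pthread_mutex_unlock(&lock);
                return;
            }
            thread1_ready = true;                    /* notify thread2 I am ready */
            pthread_cond_broadcast(&cond);
            while (thread1_ready || thread2_exclusive)
                pthread_cond_wait(&cond, &lock);     /* blocked while thread2 is exclusive */
            pthread_mutex_unlock(&lock);
        }

        void *thread2_main(void *arg)
        {
            (void)arg;
            for (;;) {
                pthread_mutex_lock(&lock);
                thread2_idle = true;                 /* notify thread1 I am idle */
                while (!thread1_ready)
                    pthread_cond_wait(&cond, &lock);
                thread1_ready     = false;
                thread2_idle      = false;
                thread2_exclusive = true;            /* notify thread1 I am exclusive */
                pthread_mutex_unlock(&lock);

                /* ... do some work while thread1 is blocked ... */

                pthread_mutex_lock(&lock);
                thread2_exclusive = false;           /* notify thread1 I am busy */
                pthread_cond_broadcast(&cond);
                pthread_mutex_unlock(&lock);

                /* ... do some work in parallel with thread1 ... */
            }
            return NULL;
        }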

    Read the article

  • How to show and update popup in 1 thread

    - by user3713986
    I have one application with two forms, MainFrm and PopupFrm, and one thread that pushes some information to PopupFrm. Currently, to update PopupFrm I use the following. In MainFrm.cs:

        private PopupFrm mypop;

        MainFrm()
        {
            ....
            PopupFrm mypop = new PopupFrm();
            mypop.Show();
        }

        MyThread()
        {
            Process GetData(); ...
            mypop.Update();
            ...
        }

    In PopupFrm.cs:

        public void Update()
        {
            this.Invoke((MethodInvoker)delegate
            {
                ....
            });
        }

    The problem here is that the popup always displays when MainFrm displays (at application start), not when there is data to update. So I changed MainFrm.cs to:

        private PopupFrm mypop;
        private bool firstdisplay = false;

        MainFrm()
        {
            ....
            PopupFrm mypop = new PopupFrm();
            //mypop.Show();
        }

        MyThread()
        {
            Process GetData(); ...
            if (!firstdisplay)
            {
                mypop.Show();
                firstdisplay = true;
            }
            mypop.Update();
            ...
        }

    But then it cannot update the popup GUI. So how can I fix this issue? Thanks all.
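    One way this is often arranged, shown only as a sketch and assuming the mypop field itself is assigned in the constructor rather than a new local variable: keep the popup hidden until the first data arrives, and marshal both the Show and the update onto the UI thread from the worker. UpdateData is a hypothetical method standing in for the poster's Update, which as a name also hides Control.Update.

        // Sketch only: everything that touches the popup goes through Invoke on the main form.
        void MyThread()
        {
            while (true)
            {
                var data = GetData();                   // background work stays on this thread
                this.Invoke((MethodInvoker)delegate
                {
                    if (!mypop.Visible)
                        mypop.Show(this);               // first display only when data exists
                    mypop.UpdateData(data);             // hypothetical update method on PopupFrm
                });
            }
        }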

    Read the article

  • Bluetooth BluetoothSocket.connect() thread: how to close this thread

    - by Hia
    I am trying to make an Android app that makes a connection to a Bluetooth device. It works fine, but when I call BluetoothSocket.connect() and it is not able to connect to the device, it blocks the thread and does not throw any exception. So when I try to close the application while connect() is running, it does not respond. How can I cancel it? I used BluetoothSocket.close() in ... but it is still not working for me.

        protected void simpleComm(Integer port) {
            // The documents tell us to cancel the discovery process.
            try {
                Method m = mmDevice.getClass().getMethod("createRfcommSocket", new Class[] { int.class });
                mmSocket = (BluetoothSocket) m.invoke(mmDevice, port);
                mmSocket.connect(); // <== blocks until it is connected
                Log.d(TAG, " connection success===");
            } catch (Exception e) {
                if (!abort) {
                    connectionFailed();
                    // Close the socket
                    try {
                        mmSocket.close();
                        // Start the service over to restart listening mode
                        BluetoothService.this.start();
                    } catch (IOException e2) {
                        Log.e(TAG, "unable to close() socket during connection failure", e2);
                    }
                }
                return;
            }
        }

        public void cancel() {
            try {
                abort = true;
                mmSocket.close();
            } catch (IOException e) {
                Log.e(TAG, "close() of connect socket failed", e);
            }
        }
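    One pattern sometimes used for this, offered as an untested sketch: since a blocked connect() can only be interrupted by closing the socket from another thread, arm a small watchdog before connecting and cancel it on success (CONNECT_TIMEOUT_MS is an assumed constant; java.util.Timer/TimerTask are used for brevity).

        // Sketch only: put a ceiling on how long connect() may block.
        Timer watchdog = new Timer();
        watchdog.schedule(new TimerTask() {
            @Override
            public void run() {
                try {
                    mmSocket.close();   // makes the blocked connect() throw IOException
                } catch (IOException e) {
                    Log.e(TAG, "close() during connect timeout failed", e);
                }
            }
        }, CONNECT_TIMEOUT_MS);

        try {
            mmSocket.connect();         // returns, fails, or is aborted by the watchdog
            watchdog.cancel();
            Log.d(TAG, "connection success");
        } catch (IOException e) {
            watchdog.cancel();
            connectionFailed();
        }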

    Read the article

  • "Unable to initialize module" warning after update PHP on CentOS 5.4

    - by ohho
    After upgrading PHP from 5.1.x to 5.2.10, there are a lot of warnings when running php -v:

        [root@localhost ~]# php -v
        PHP Warning:  PHP Startup: fileinfo: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP Warning:  PHP Startup: mcrypt: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP Warning:  PHP Startup: memcache: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP Warning:  PHP Startup: mhash: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP Warning:  PHP Startup: mssql: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP Warning:  PHP Startup: readline: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP Warning:  PHP Startup: tidy: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP 5.2.10 (cli) (built: Nov 13 2009 11:24:03)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies

    How can I fix that? Thanks!

    Read the article

  • How to Get The Output Of a Command Terminated by an alarm() Call

    - by rockyurock
    Case 1: If I run the command below (iperf in UL only), I am able to capture the output in a text file:

        @output = readpipe("iperf.exe -u -c 127.0.0.1 -p 5001 -b 3600k -t 10 -i 1");
        open FILE, ">Misplay_DL.txt" or die $!;
        print FILE @output;
        close FILE;

    Case 2: When I run iperf in DL mode, as we know, the server starts listening in continuous mode and keeps listening even after getting data from the client (here I am using the server and client on a LAN):

        @output = system("iperf.exe -u -s -p 5001 -i 1");

    On the server side:

        D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTOMATION_APP_\AUTOMATION_UTILITY>iperf.exe -u -s -p 5001
        ------------------------------------------------------------
        Server listening on UDP port 5001
        Receiving 1470 byte datagrams
        UDP buffer size: 8.00 KByte (default)
        ------------------------------------------------------------
        [1896] local 192.168.5.101 port 5001 connected with 192.168.5.101 port 4878
        [ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
        [1896]  0.0- 2.0 sec   881 KBytes  3.58 Mbits/sec  0.000 ms    0/  614 (0%)

    The command prompt does not return; the process keeps running. On the client side:

        D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTOMATION_APP_\AUTOMATION_UTILITY>iperf.exe -u -c 192.168.5.101 -p 5001 -b 3600k -t 2 -i 1
        ------------------------------------------------------------
        Client connecting to 192.168.5.101, UDP port 5001
        Sending 1470 byte datagrams
        UDP buffer size: 8.00 KByte (default)
        ------------------------------------------------------------
        [1880] local 192.168.5.101 port 4878 connected with 192.168.5.101 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [1880]  0.0- 1.0 sec   441 KBytes  3.61 Mbits/sec
        [1880]  1.0- 2.0 sec   439 KBytes  3.60 Mbits/sec
        [1880]  0.0- 2.0 sec   881 KBytes  3.58 Mbits/sec
        [1880] Server Report:
        [1880]  0.0- 2.0 sec   881 KBytes  3.58 Mbits/sec  0.000 ms    0/  614 (0%)
        [1880] Sent 614 datagrams

    So, because the server keeps listening and never terminates, I can't capture the server-side output to a text file; execution never reaches the next command that creates the text file. I therefore adopted the alarm() function to terminate the server side (iperf.exe -u -s -p 5001) after it has received all the data from the client. Could anybody suggest the way? Here is my code:

        #!/usr/bin/perl -w
        my $command = "iperf.exe -u -s -p 5001";
        my @output;
        eval {
            local $SIG{ALRM} = sub { die "Timeout\n" };
            alarm 20;
            #@output = `$command`;
            #my @output = readpipe("iperf.exe -u -s -p 5001");
            #my @output = exec("iperf.exe -u -s -p 5001");
            my @output = system("iperf.exe -u -s -p 5001");
            alarm 0;
        };
        if ($@) {
            warn "$command timed out.\n";
        } else {
            print "$command successful. Output was:\n", @output;
        }
        open FILE, ">display.txt" or die $!;
        print FILE @output_1;
        close FILE;

    I know that with the system command I cannot capture the output to a text file, but I tried with readpipe() and exec() calls as well, in vain. Could someone please take a look and let me know why iperf.exe -u -s -p 5001 is not terminating even after the alarm call, and how to get the output into a text file?
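    One way this is usually attempted, offered as a sketch rather than a verified fix (with a caveat: on Windows perls, alarm() may not interrupt a blocking read, so a select()-based loop or Win32 process control may be needed there): read the server's output line by line through a piped open, and kill the child when the alarm fires. system() can never return a command's output, and backticks/readpipe() only return once the command exits, which a listening iperf server never does.

        #!/usr/bin/perl
        # Sketch: capture output from a command that never exits on its own.
        use strict;
        use warnings;

        my $command = 'iperf.exe -u -s -p 5001 -i 1';
        my @output;

        my $pid = open(my $iperf, '-|', $command) or die "Cannot start iperf: $!";
        eval {
            local $SIG{ALRM} = sub { die "Timeout\n" };
            alarm 20;
            while (my $line = <$iperf>) {
                push @output, $line;          # collect whatever the server prints
            }
            alarm 0;
        };
        if ($@) {
            die $@ unless $@ eq "Timeout\n";
            kill 'KILL', $pid;                # stop the still-listening server
        }
        close $iperf;

        open my $fh, '>', 'display.txt' or die $!;
        print {$fh} @output;
        close $fh;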

    Read the article

  • How To Edit XML File

    - by thebourneid
    I have a movie collection catalogue with local links to folders and files for easy access. Recently I reorganized my entire hard disk and I need to update the links; I'm trying to do that automatically with Perl. I can export the data to an XML file and import it again. I can extract the new file paths with the use of File::Find, but I'm stuck on two problems. I have no idea how to connect the $title from the new file path with the corresponding $title from the XML file, and I'm dealing with such files for the first time, so I don't know how to proceed with the replacement process. Here is what I've done so far:

        use strict;
        use warnings;
        use File::Basename;
        use File::Find;
        use File::Spec;
        use XML::Simple;
        use Data::Dumper;

        my $dir_target = 'somepath';
        find(\&a, $dir_target);

        sub a {
            /\.iso$/ or return;
            my $fn = $File::Find::name;
            $fn =~ s/\//\\/g;
            $fn =~ /(.*\\)(.*)/;
            my $path     = $1;
            my $filename = $2;
            my $title = (File::Spec->splitdir($fn))[2];
            $title =~ s/(.*?)\s\(\d+\)$/$1/;
            $title =~ s/~/:/;
            $title =~ s/`/?/;
            my $link_local = '<link><description>Folder</description><url>' . $path
                . '</url><urltype>Movie</urltype></link><link><description>' . $filename
                . '</description><url>' . $fn . '</url><urltype>Movie</urltype></link>'
                unless $title eq '';
            my $txt = 'somepath/log.txt';
            my $xml_in  = XMLin('somepath/test.xml', ForceArray => 1, KeepRoot => 1);
            my $xml_out = XMLout($xml_in, OutputFile => 'somepath/test_out.xml', KeepRoot => 1);
            open F, ">>", $txt;
            print F $link_local . "\n\n";
            close F;
        }

    And here is a snippet of the data I need to edit. If the imdb and dvdempire links are found, do not touch them; if local links are found, replace them, otherwise insert them. I'm willing to complete the code myself but need some directions on how to proceed further. Thanks.

        <title>$title</title>
        .......
        <links>
          <link>
            <description>IMDB</description>
            <url>http://www.imdb.com/title/VARIABLE</url>
            <urltype>URL</urltype>
          </link>
          <link>
            <description>DVD Empire</description>
            <url>http://www.dvdempire.com/VARIABLE</url>
            <urltype>URL</urltype>
          </link>
          <link>
            <description>Folder</description>
            <url>OLD_FOLDERPATH</url>
            <urltype>Movie</urltype>
          </link>
          <link>
            <description>OLD_FILENAME</description>
            <url>OLD_FILENAMEPATH</url>
            <urltype>Movie</urltype>
          </link>
        </links>

    Read the article

  • How to make Net::Msmgr run?

    - by codeholic
    There's a Net::Msmgr module on CPAN. It's written cleanly and the code looks trustworthy at first glance. However, this module seems to be beta, and there is little documentation and no tests :-/ Has anyone used this module in production? I haven't managed to make it run so far, because it requires all event loop processing to be done in the application and, as I've already said, there is little documentation and no working examples to study. This is how far I've gotten:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Event;
        use Net::Msmgr::Object;
        use Net::Msmgr::Session;
        use Net::Msmgr::User;

        use constant DEBUG => 511;
        use constant EVENT_TIMEOUT => 5; # seconds

        my ($username, $password) = qw/[email protected] my.password/;
        my $buddy = '[email protected]';

        my $user = Net::Msmgr::User->new(user => $username, password => $password);
        my $session = Net::Msmgr::Session->new;
        $session->debug(DEBUG);
        $session->login_handler(\&login_handler);
        $session->user($user);

        my $conv;

        sub login_handler {
            my $self = shift;
            print "LOGIN\n";
            $self->ui_state_nln;
            $conv = $session->ui_new_conversation;
            $conv->invite($buddy);
        }

        our %watcher;

        sub ConnectHandler {
            my ($connection) = @_;
            warn "CONNECT\n";
            my $socket = $connection->socket;
            $watcher{$connection} = Event->io(
                fd     => $socket,
                cb     => [ $connection, '_recv_message' ],
                poll   => 're',
                desc   => 'recv_watcher',
                repeat => 1,
            );
        }

        sub DisconnectHandler {
            my $connection = shift;
            print "DISCONNECT\n";
            $watcher{$connection}->cancel;
        }

        $session->connect_handler(\&ConnectHandler);
        $session->disconnect_handler(\&DisconnectHandler);
        $session->Login;
        Event::loop();

    This is what it outputs:

        Dispatch Server connecting to: messenger.hotmail.com:1863
        Dispatch Server connected
        CONNECT
        Dispatch Server >>>VER 1 MSNP2 CVR0
        --> VER 1 MSNP2 CVR0
        Dispatch Server >>>USR 2 MD5 I [email protected]
        --> USR 2 MD5 I [email protected]
        Dispatch Server <<<VER 1 CVR0
        <-- VER 1 CVR0

    And that's all; here it hangs. The handler on login is not being triggered. What am I doing wrong?

    Read the article

  • FTP into a server using specific usernames

    - by user1854765
    I am trying to FTP into a server; once I'm there, I want to get a file, then put it back after sleeping for 5 minutes. I have that part correct, but I added two variables to the code that are supplied when the script is executed: the user inputs the username they want to connect with. I am having trouble connecting, though. When I input the username t14pb, it still goes into the first if statement, as if I had said t14pmds. Here is the code:

        #!/usr/bin/perl
        use Net::FTP;

        $host = "fd00p02";
        $username = "$ARGV[0]";
        $ftpdir = "/";
        $file = "$ARGV[1]";

        print "$username\n";
        print "$file\n";

        if ($username == t14pmds) {
            $password = "test1";
            $ftp = Net::FTP->new($host) or die "Error connecting to $host: $!";
            $ftp->login($username, $password) or die "Login failed: $!";
            $ftp->cwd($ftpdir) or die "Can't go to $ftpdir: $!";
            $ftp->get($file) or die "Can't get $file: $!";
            sleep 5;
            $ftp->put($file) or die "Can't put $file: $!";
            $ftp->quit or die "Error closing ftp connection: $!";
        }
        if ($username == t14pb) {
            $password = "test2";
            $ftp = Net::FTP->new($host) or die "Error connecting to $host: $!";
            $ftp->login($username, $password) or die "Login failed: $!";
            $ftp->cwd($ftpdir) or die "Can't go to $ftpdir: $!";
            $ftp->get($file) or die "Can't get $file: $!";
            sleep 5;
            $ftp->put($file) or die "Can't put $file: $!";
            $ftp->quit or die "Error closing ftp connection: $!";
        }
        if ($username == t14pmds_out) {
            $password = "test3";
            $ftp = Net::FTP->new($host) or die "Error connecting to $host: $!";
            $ftp->login($username, $password) or die "Login failed: $!";
            $ftp->cwd($ftpdir) or die "Can't go to $ftpdir: $!";
            $ftp->get($file) or die "Can't get $file: $!";
            sleep 5;
            $ftp->put($file) or die "Can't put $file: $!";
            $ftp->quit or die "Error closing ftp connection: $!";
        }
        if ($username == t14fiserv) {
            $password = "test4";
            $ftp = Net::FTP->new($host) or die "Error connecting to $host: $!";
            $ftp->login($username, $password) or die "Login failed: $!";
            $ftp->cwd($ftpdir) or die "Can't go to $ftpdir: $!";
            $ftp->get($file) or die "Can't get $file: $!";
            sleep 5;
            $ftp->put($file) or die "Can't put $file: $!";
            $ftp->quit or die "Error closing ftp connection: $!";
        }
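    For what it's worth, a possible restructuring is sketched below (the host and passwords are copied from the question; treat everything else as illustrative). Numeric == turns both sides into numbers, so "t14pb" == t14pmds compares 0 with 0 and every branch matches, starting with the first block; string comparison with eq, or a hash lookup as shown here, avoids that and also removes the four nearly identical blocks.

        #!/usr/bin/perl
        # Sketch: pick the password from a hash and compare strings, not numbers.
        use strict;
        use warnings;
        use Net::FTP;

        my ($username, $file) = @ARGV;
        my %password_for = (
            t14pmds     => 'test1',
            t14pb       => 'test2',
            t14pmds_out => 'test3',
            t14fiserv   => 'test4',
        );
        die "Unknown user\n" unless defined $username and exists $password_for{$username};

        my $ftp = Net::FTP->new('fd00p02') or die "Error connecting to fd00p02: $@";
        $ftp->login($username, $password_for{$username}) or die "Login failed: ", $ftp->message;
        $ftp->cwd('/')    or die "Can't go to /: ",   $ftp->message;
        $ftp->get($file)  or die "Can't get $file: ", $ftp->message;
        sleep 300;        # the question says five minutes; the original code sleeps 5 seconds
        $ftp->put($file)  or die "Can't put $file: ", $ftp->message;
        $ftp->quit;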

    Read the article

  • Create File Speedily From Individual Column

    - by neversaint
    I have data that looks like this:

        -1 1:-0.394668 2:-0.794872 3:-1 4:-0.871341 5:0.9365 6:0.75597
        1 1:-0.463641 2:-0.897436 3:-1 4:-0.871341 5:0.44378 6:0.121824
        1 1:-0.469432 2:-0.897436 3:-1 4:-0.871341 5:0.32668 6:0.302529
        -1 1:-0.241547 2:-0.538462 3:-1 4:-0.871341 5:0.9994 6:0.987166
        1 1:-0.757233 2:-0.948718 3:-1 4:-0.871341 5:-0.33904 6:0.915401
        1 1:-0.167147 2:-0.589744 3:-1 4:-0.871341 5:0.95078 6:0.991566

    The first column is the class, and the next 6 columns are features. I want to create 6 files for the individual features. For example, feat1_file.txt will contain

        -1 1:-0.394668
        1 1:-0.463641
        ...
        1 1:-0.757233
        1 1:-0.167147

    feat2_file.txt will contain

        -1 2:-0.794872
        ...
        1 2:-0.589744

    and so on. I have Perl code that does this, but it is horribly slow. Is there a way to do it faster? Typically the input files will contain 100K lines.

        use strict;
        use Data::Dumper;
        use Carp;

        my $input = $ARGV[0] || "myinput.txt";
        my $INFILE_file_name = $input;    # input file name
        open ( INFILE, '<', $INFILE_file_name )
            or croak "$0 : failed to open input file $INFILE_file_name : $!\n";

        my $out1 = $input."_feat_1.txt";
        my $out2 = $input."_feat_2.txt";
        my $out3 = $input."_feat_3.txt";
        my $out4 = $input."_feat_4.txt";
        my $out5 = $input."_feat_5.txt";
        my $out6 = $input."_feat_6.txt";
        unlink($out1); unlink($out2); unlink($out3);
        unlink($out4); unlink($out5); unlink($out6);
        print "$out1\n";

        while ( <INFILE> ) {
            chomp;
            my @els = split(/\s+/,$_);
            my $lbl = $els[0];

            my $OUTFILE1_file_name = $out1;    # output file name
            open ( OUTFILE1, '>>', $OUTFILE1_file_name )
                or croak "$0 : failed to open output file $OUTFILE1_file_name : $!\n";
            print OUTFILE1 "$lbl $els[1]\n";
            close ( OUTFILE1 );    # close output file

            my $OUTFILE2_file_name = $out2;    # output file name
            open ( OUTFILE2, '>>', $OUTFILE2_file_name )
                or croak "$0 : failed to open output file $OUTFILE2_file_name : $!\n";
            print OUTFILE2 "$lbl $els[2]\n";
            close ( OUTFILE2 );    # close output file

            # Etc.. until OUTFILE 6
        }
        close (INFILE);
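    The usual speed-up here, shown as a sketch under the assumption that the output naming and line format stay exactly as in the question: open the six output files once, before the loop, instead of re-opening and closing each of them for every one of the 100K input lines.

        #!/usr/bin/perl
        # Sketch: one open per output file instead of one open per line per file.
        use strict;
        use warnings;
        use Carp;

        my $input = $ARGV[0] || 'myinput.txt';
        open my $in, '<', $input or croak "$0 : failed to open input file $input : $!\n";

        my @out;
        for my $i (1 .. 6) {
            open $out[$i], '>', "${input}_feat_${i}.txt"
                or croak "$0 : failed to open output file $i : $!\n";
        }

        while (<$in>) {
            chomp;
            my ($lbl, @feats) = split /\s+/;
            print { $out[$_] } "$lbl $feats[$_ - 1]\n" for 1 .. 6;
        }

        close $out[$_] for 1 .. 6;
        close $in;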

    Read the article

  • Opening a file with a variable as name and checking for undefined values

    - by Harm De Weirdt
    I'm having some problems writing data into a file using Perl.

        sub startNewOrder {
            my $name = makeUniqueFileName();
            open (ORDER, ">$name.txt") or die "can't open file: $!\n";

            format ORDER_TOP =
        PRODUCT<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<CODE<<<<<<<<AANTAL<<<<EENHEIDSPRIJS<<<<<<TOTAAL<<<<<<<
        .
            format ORDER =
        @<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< @<<<<<<<< @<<<< @<<<<<< @<<<<<
        $title, $code, $amount, $price, $total
        .
            close (ORDER);
        }

    This is the sub I use to make the file. (I translated most of it.) The makeUniqueFileName function builds a file name based on the current time ("minuteshoursdayOrder"). The problem now is that I have to write to this file in another sub.

        sub addToOrder {
            print "give productcode:";
            $code = <STDIN>;
            chop $code;
            print "Give amount:";
            $amount = <STDIN>;
            chop $amount;

            if ($inventory{$code} eq undef) {    # Does the product exist?
                print "This product does not exist";
            } elsif ($inventory{$code}[2] < $amount && !defined($inventaris{$code}[2])) {    # Is there enough in the inventory?
                print "There is not enough in stock"
            } else {
                $inventory{$code}[2] -= $amount;
                # write in order file
                open (ORDER ">>$naam.txt") or die "can't open file: $!\n";
                $title  = $inventory{$code}[0];
                $code   = $code;
                $amount = $inventory{$code}[2];
                $price  = $inventory{$code}[1];
                $total  = $inventory{$code}[1];
                write;
                close(ORDER);
            }

    %inventory is a hash that has the product code as key and an array with the title, price and amount as value. There are two problems here: when I enter an invalid product number, I still have to enter an amount, even though my code should print the error directly after checking whether a product with the given code exists. The second problem is that the writing doesn't seem to work; it always gives a "No such file or directory" error. Is there a way to open the ORDER file I made in the first sub without making $name non-local? Or just a way to write to this file? I really don't know how to start here. I can't really find much info on writing to a file that has been closed before, and in a different sub. Any help is appreciated, Harm
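    On the first question ("is there a way to open the ORDER file I made in the first sub without making $name non-local?"), one common arrangement is sketched below: return the unique file name from startNewOrder and pass it into addToOrder. The write/format machinery is left tied to the ORDER bareword handle as in the original; note also that the second open as quoted is missing a comma and uses $naam where the first sub used $name, which is worth checking.

        # Sketch only: hand the file name from one sub to the other.
        sub startNewOrder {
            my $name = makeUniqueFileName();
            open(ORDER, '>', "$name.txt") or die "can't open file: $!\n";
            close(ORDER);
            return $name;                  # give the caller something to pass around
        }

        sub addToOrder {
            my ($name) = @_;               # the file name comes in as an argument
            open(ORDER, '>>', "$name.txt") or die "can't open file: $!\n";
            # ... set $title, $code, $amount, $price, $total as before, then:
            write(ORDER);                  # uses the ORDER format declared earlier
            close(ORDER);
        }

        my $order_name = startNewOrder();
        addToOrder($order_name);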

    Read the article

  • MySQL Cluster 7.2: Over 8x Higher Performance than Cluster 7.1

    - by Mat Keep
    Summary

    The scalability enhancements delivered by extensions to multi-threaded data nodes enable MySQL Cluster 7.2 to deliver over 8x higher performance than the previous MySQL Cluster 7.1 release on a recent benchmark.

    What’s New in MySQL Cluster 7.2

    MySQL Cluster 7.2 was released as GA (Generally Available) in February 2012, delivering many enhancements to performance on complex queries, a new NoSQL Key / Value API, cross-data center replication and ease-of-use. These enhancements are summarized in the Figure below, and detailed in the MySQL Cluster New Features whitepaper.

    Figure 1: Next Generation Web Services, Cross Data Center Replication and Ease-of-Use

    One of the key enhancements delivered in MySQL Cluster 7.2 is the set of extensions made to the multi-threading processes of the data nodes.

    Multi-Threaded Data Node Extensions

    The MySQL Cluster 7.2 data node is now functionally divided into seven thread types:

    1) Local Data Manager threads (ldm). Note: these are sometimes also called LQH threads.
    2) Transaction Coordinator threads (tc)
    3) Asynchronous Replication threads (rep)
    4) Schema Management threads (main)
    5) Network receiver threads (recv)
    6) Network send threads (send)
    7) IO threads

    Each of these thread types is discussed in more detail below.

    MySQL Cluster 7.2 increases the maximum number of LDM threads from 4 to 16. The LDM contains the actual data, which means that when using 16 threads the data is more heavily partitioned (this is automatic in MySQL Cluster). Each LDM thread maintains its own set of data partitions, index partitions and REDO log. The number of LDM partitions per data node is not dynamically configurable, but it is possible, however, to map more than one partition onto each LDM thread, providing flexibility in modifying the number of LDM threads.

    The TC domain stores the state of in-flight transactions. This means that every new transaction can easily be assigned to a new TC thread. Testing has shown that in most cases 1 TC thread per 2 LDM threads is sufficient, and in many cases even 1 TC thread per 4 LDM threads is also acceptable. Testing also demonstrated that in some instances where the workload needed to sustain very high update loads it is necessary to configure 3 to 4 TC threads per 4 LDM threads. In the previous MySQL Cluster 7.1 release, only one TC thread was available. This limit has been increased to 16 TC threads in MySQL Cluster 7.2. The TC domain also manages the Adaptive Query Localization functionality introduced in MySQL Cluster 7.2 that significantly enhanced complex query performance by pushing JOIN operations down to the data nodes.

    Asynchronous Replication was separated into its own thread with the release of MySQL Cluster 7.1, and has not been modified in the latest 7.2 release.

    To scale the number of TC threads, it was necessary to separate the Schema Management domain from the TC domain. The schema management thread has little load, so it is implemented with a single thread.

    The Network receiver domain was bound to 1 thread in MySQL Cluster 7.1. With the increase of threads in MySQL Cluster 7.2 it is also necessary to increase the number of recv threads to 8. This enables each receive thread to service one or more sockets used to communicate with other nodes in the Cluster.

    The Network send thread is a new thread type introduced in MySQL Cluster 7.2. Previously other threads handled the sending operations themselves, which can provide for lower latency. To achieve the highest throughput, however, it has been necessary to create dedicated send threads, of which 8 can be configured. It is still possible to configure MySQL Cluster 7.2 in a legacy mode that does not use any of the send threads, which is useful for those workloads that are most sensitive to latency.

    The IO Thread is the final thread type and there have been no changes to this domain in MySQL Cluster 7.2. Multiple IO threads were already available, which could be configured to either one thread per open file, or to a fixed number of IO threads that handle the IO traffic. Except when using compression on disk, the IO threads typically have a very light load. (A sketch of how these thread domains map onto the data node configuration follows at the end of this article.)

    Benchmarking the Scalability Enhancements

    The scalability enhancements discussed above have made it possible to scale CPU usage of each data node to more than 5x of that possible in MySQL Cluster 7.1. In addition, a number of bottlenecks have been removed, making it possible to scale data node performance by even more than 5x.

    Figure 2: MySQL Cluster 7.2 Delivers 8.4x Higher Performance than 7.1

    The flexAsynch benchmark was used to compare MySQL Cluster 7.2 performance to 7.1 across an 8-node Intel Xeon x5670-based cluster of dual-socket commodity servers (6 cores each). As the results demonstrate, MySQL Cluster 7.2 delivers over 8x higher performance per data node than MySQL Cluster 7.1. More details of this and other benchmarks will be published in a new whitepaper, coming soon, so stay tuned!

    In a following blog post, I’ll provide recommendations on optimum thread configurations for different types of server processor. You can also learn more from the Best Practices Guide to Optimizing Performance of MySQL Cluster.

    Conclusion

    MySQL Cluster has achieved a range of impressive benchmark results, and set in context with the previous 7.1 release, is able to deliver over 8x higher performance per node. As a result, the multi-threaded data node extensions not only serve to increase the performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs.
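    As a rough illustration of how the thread domains described in this post map onto a data node's configuration (a sketch only: the ThreadConfig parameter appears in later MySQL Cluster 7.2 releases, the counts are purely illustrative, and MaxNoOfExecutionThreads is the simpler alternative):

        # config.ini fragment for the data nodes (illustrative values)
        [ndbd default]
        NoOfReplicas=2
        # one entry per thread domain discussed above
        ThreadConfig=ldm={count=8},tc={count=4},send={count=2},recv={count=2},main={count=1}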

    Read the article

  • How do you get XML::Pastor to set xsi:type for programmatically generated elements?

    - by Derrick
    I'm learning how to use Perl as an automation test framework tool for a Java web service and running into trouble generating XML requests from the Pastor-generated modules. The problem is that when including a type that extends from the required type for an element, the xsi:type is not included in the generated XML string. Say, for example, I want to generate the following XML request from the modules that XML::Pastor generated from my XSD:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <PromptAnswersRequest xmlns="http://mycompany.com/api">
          <Uri>/some/url</Uri>
          <User ref="1"/>
          <PromptAnswers>
            <PromptAnswer xsi:type="textPromptAnswer" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
              <Prompt ref="2"/>
              <Children>
                <PromptAnswer xsi:type="choicePromptAnswer">
                  <Prompt ref="1"/>
                  <Choice ref="2"/>
                </PromptAnswer>
              </Children>
              <Value>totally</Value>
            </PromptAnswer>
          </PromptAnswers>
        </PromptAnswersRequest>

    What I'm getting currently is this:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <PromptAnswersRequest xmlns="http://mycompany.com/api">
          <Uri>/some/url</Uri>
          <User ref="1"/>
          <PromptAnswers>
            <PromptAnswer>
              <Prompt ref="2"/>
              <Children>
                <PromptAnswer>
                  <Prompt ref="1"/>
                  <Choice ref="2"/>
                </PromptAnswer>
              </Children>
              <Value>totally</Value>
            </PromptAnswer>
          </PromptAnswers>
        </PromptAnswersRequest>

    Here are some relevant snippets from the XSD:

        <xs:complexType name="request">
          <xs:sequence>
            <xs:element name="Uri" type="xs:anyURI"/>
          </xs:sequence>
        </xs:complexType>

        <xs:complexType name="promptAnswersRequest">
          <xs:complexContent>
            <xs:extension base="api:request">
              <xs:sequence>
                <xs:element name="User" type="api:ref"/>
                <xs:element name="PromptAnswers" type="api:promptAnswerList"/>
              </xs:sequence>
            </xs:extension>
          </xs:complexContent>
        </xs:complexType>

        <xs:complexType name="promptAnswerList">
          <xs:sequence>
            <xs:element name="PromptAnswer" type="api:promptAnswer" minOccurs="0" maxOccurs="unbounded"/>
          </xs:sequence>
        </xs:complexType>

        <xs:complexType name="promptAnswer" abstract="true">
          <xs:sequence>
            <xs:element name="Prompt" type="api:ref"/>
            <xs:element name="Children" type="api:promptAnswerList" minOccurs="0"/>
          </xs:sequence>
        </xs:complexType>

        <xs:complexType name="textPromptAnswer">
          <xs:complexContent>
            <xs:extension base="promptAnswer">
              <xs:sequence>
                <xs:element name="Value" type="api:nonEmptyString" minOccurs="0"/>
              </xs:sequence>
            </xs:extension>
          </xs:complexContent>
        </xs:complexType>

    And here are the relevant parts of the script:

        my $promptAnswerList = new My::API::Type::promptAnswerList;
        my @promptAnswers;
        my $promptAnswerList2 = new My::API::Type::promptAnswerList;
        my @textPromptAnswerChildren;

        my $textPromptAnswer = new My::API::Type::textPromptAnswer;
        my $textPromptAnswerRef = new My::API::Type::ref;
        $textPromptAnswerRef->ref('2');
        $textPromptAnswer->Prompt($textPromptAnswerRef);

        my $choicePromptAnswer = new My::API::Type::choicePromptAnswer;
        my $choicePromptAnswerPromptRef = new My::API::Type::ref;
        my $choicePromptAnswerChoiceRef = new My::API::Type::ref;
        $choicePromptAnswerPromptRef->ref('1');
        $choicePromptAnswerChoiceRef->ref('2');
        $choicePromptAnswer->Prompt($choicePromptAnswerPromptRef);
        $choicePromptAnswer->Choice($choicePromptAnswerChoiceRef);

        push(@textPromptAnswerChildren, $choicePromptAnswer);
        $promptAnswerList2->PromptAnswer(@textPromptAnswerChildren);
        $textPromptAnswer->Children($promptAnswerList2);
        $textPromptAnswer->Value('totally');

        push(@promptAnswers, $pulseTextPromptAnswer);
        push(@promptAnswers, $textPromptAnswer);

    I haven't seen this addressed anywhere in the documentation for the XML::Pastor modules, so if anyone can point me at a good reference for its use it would be greatly appreciated. Also, I'm only using XML::Pastor because I don't know of any other modules that can do this, so if any of you know of something either easier to use, or more well maintained, please let me know about that too!

    Read the article
