Search Results

Search found 4238 results on 170 pages for 'lost soul'.

  • Javascript onunload form submit isn't submitting data

    - by Kevin
    I currently have a form that checks whether a user has unsubmitted changes when they leave the page, using a function called through the onunload event. Here's the function:

        function saveOnExit() {
            var answer = confirm("You are about to leave the page. All unsaved work will be lost. Would you like to save now?");
            if (answer) {
                document.main_form.submit();
            }
        }

    And here's the form:

        <body onunload="saveOnExit()">
        <form name="main_form" id="main_form" method="post" action="submit.php" onsubmit="saveScroll()">
            <textarea name="comments"></textarea>
            <input type="submit" name="submit2" value="Submit!"/>
        </form>

    I'm not sure what I'm doing wrong here. The data gets submitted and saved in my database if I just press the form's submit button. However, submitting the form through the onunload event doesn't result in anything being stored, from what I can tell. I've tried adding onclick alerts to the submit button and onsubmit alerts to the form elements, and I can verify that the submit button is being triggered and that the form does get submitted. However, nothing gets stored in the database. Any ideas as to what I'm doing wrong? Thanks.

  • How to isolate a single element from a scraped web page in R

    - by PaulHurleyuk
    Hello, I'm trying to do someone a favour, and it's a tad outside my comfort zone, so I'm stuck. I want to use R to scrape this page (http://www.fifa.com/worldcup/archive/germany2006/results/matches/match=97410001/report.html) and others, to get the goal scorers and times. So far, this is what I've got:

        require(RCurl)
        require(XML)
        theURL <- "http://www.fifa.com/worldcup/archive/germany2006/results/matches/match=97410001/report.html"
        webpage <- getURL(theURL, header=FALSE, verbose=TRUE)
        webpagecont <- readLines(tc <- textConnection(webpage)); close(tc)
        pagetree <- htmlTreeParse(webpagecont, error=function(...){}, useInternalNodes = TRUE)

    and the pagetree object now contains a pointer to my parsed html (I think). The part I want is:

        <div class="cont"><ul>
            <div class="bold medium">Goals scored</div>
            <li>Philipp LAHM (GER) 6', </li>
            <li>Paulo WANCHOPE (CRC) 12', </li>
            <li>Miroslav KLOSE (GER) 17', </li>
            <li>Miroslav KLOSE (GER) 61', </li>
            <li>Paulo WANCHOPE (CRC) 73', </li>
            <li>Torsten FRINGS (GER) 87'</li>
        </ul></div>

    but I'm now lost as to how to isolate them, and frankly xpathSApply and xpathApply confuse the beejeebies out of me! So, does anyone know how to formulate a command to suck out the elements contained within those tags? Thanks, Paul.

  • How to prevent Hibernate from nullifying relationship column during entity removal

    - by Grzegorz
    I have two entities, A and B. I need to easily retrieve entities A, joined with entities B on the condition of equal values in some column (a column in A equal to a column in B). Those columns are not primary or foreign keys; they contain the same business data. I just need to have access, from each instance of A, to the collection of B's with the same value in this column. So I model it like this:

        class A {
            @OneToMany
            @JoinColumn(name="column_in_B", referencedColumnName="column_in_A")
            Collection<B> bs;

    This way, I can run queries like "select A join fetch a.bs b where b....". (Actually, the real relationship here is many-to-many, but when I use @ManyToMany, Hibernate forces me to use a join table, which doesn't exist here. So I have to use @OneToMany as a workaround.) So far so good. The main problem is: whenever I delete an instance of A, Hibernate calls "update B set column_in_B = null", because it thinks column_in_B is a foreign key pointing at the primary key in A (and because the row in A is deleted, it tries to clean up the foreign key in B). BUT column_in_B IS NOT a foreign key and can't be modified, because that causes data loss (and this column is NOT NULL anyway in my case, causing a data integrity exception to be thrown). Please help me with this. How should I model such relationships with Hibernate? (I would call them "virtual relationships" or "secondary relationships": they are not based on foreign keys, they are just shortcuts which allow retrieving related objects and querying for them with HQL.)
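    One direction that may be worth checking (this is an assumption, not something from the original question): @JoinColumn accepts insertable/updatable flags, and declaring the pseudo-foreign-key column read-only is sometimes enough to stop Hibernate from writing to it when the owning row is removed. A sketch, to be verified against the Hibernate version in use:

        // Sketch only: declare the mapping read-only so Hibernate should not
        // try to update (or null out) column_in_B when an A is deleted.
        // Whether this suppresses the nulling UPDATE in every Hibernate
        // version is an assumption to test, not a guaranteed fix.
        @OneToMany
        @JoinColumn(name = "column_in_B", referencedColumnName = "column_in_A",
                    insertable = false, updatable = false)
        Collection<B> bs;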

  • Adding changes from one Mercurial repository to another

    - by Patrik Hägne
    When changing the VCS for my project FakeItEasy from SVN to Mercurial on Google Code, I was a bit too eager (I'm funny like that). What I did was just check the latest version out of SVN and then commit that checkout as the first revision of the new Mercurial repo. This obviously has the effect that all history is lost. Later, when getting a bit better accustomed to Mercurial, I realized that there is such a thing as a "convert extension" that allows you to convert an SVN repo into a Mercurial repo. Now what I want to do is to convert the old SVN repo and then have all changesets from the currently existing Mercurial repo imported into this converted repo, except the very first commit to Mercurial. I've converted the SVN repo to a local Mercurial repo, but now is when I'm stuck. I thought I'd be able to use the convert extension to bring the current Mercurial repository into the converted one, with a splice map removing the first commit, but I cannot seem to get this to work. I've also tried to just use convert without a splice map to get all changesets from the current Mercurial repo into the converted one, and then rebase the second revision in the current repo onto the last commit from the old SVN repository, but I can't get that to work either. To make this clearer, let's say I have these two repositories:

        A: revA1-revA2
        B: revB1-revB2-revB3   (where revB1 is actually a copy of revA2)

    Now I want to combine these two into a new repository containing this:

        C: revA1-revA2-revB2-revB3

  • MySQL BinLog Statement Retrieval

    - by Jonathon
    I have seven 1G MySQL binlog files that I have to use to retrieve some "lost" information. I only need to get certain INSERT statements from the log (e.g. where the statement starts with "INSERT INTO table SET field1="). If I just run mysqlbinlog (even per database and with --short-form), I get a text file that is several hundred megabytes, which makes it almost impossible to then parse with any other program. Is there a way to retrieve only certain SQL statements from the log? I don't need any of the ancillary information (timestamps, autoincrement #s, etc.). I just need a list of SQL statements that match a certain string. Ideally, I would like to have a text file that just lists those SQL statements, such as:

        INSERT INTO table SET field1='a';
        INSERT INTO table SET field1='tommy';
        INSERT INTO table SET field1='2';

    I could get that by running mysqlbinlog to a text file and then parsing the results based upon a string, but the text file is way too big. It just times out any script I run and even makes it impossible to open in a text editor. Thanks for your help in advance.
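    As a rough illustration of the line-by-line filtering idea (not from the original question, and the prefix string is only an example), a small program that reads the decoded mysqlbinlog output as a stream never has to hold the whole file in memory:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        // Sketch: filter mysqlbinlog's text output from stdin, keeping only the
        // statements that start with a given prefix. Illustrative usage:
        //   mysqlbinlog binlog.000001 | java FilterInserts > wanted.sql
        public class FilterInserts {
            public static void main(String[] args) throws IOException {
                String prefix = "INSERT INTO table SET field1=";  // example prefix
                BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.startsWith(prefix)) {
                        System.out.println(line);
                    }
                }
            }
        }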

  • Help needed in grokking password hashes and salts

    - by javafueled
    I've read a number of SO questions on this topic, but grokking the applied practice of storing a salted hash of a password eludes me. Let's start with some ground rules:

        - a password, "foobar12" (we are not discussing the strength of the password)
        - a language, Java 1.6 for this discussion
        - a database: PostgreSQL, MySQL, SQL Server, Oracle

    Several options are available for storing the password, but I want to think about one (1): store the password hashed with a random salt in the DB, in one column. Found on SO and elsewhere is the automatic fail of plaintext, MD5/SHA1, and dual columns. The latter have pros and cons. MD5/SHA1 is simple. MessageDigest in Java provides MD5 and SHA1 (through SHA-512 in modern implementations, certainly 1.6). Additionally, most of the RDBMSs listed provide MD5 encryption functions for inserts, updates, etc. The problems become evident once one groks "rainbow tables" and MD5 collisions (and I've grokked these concepts). Dual-column solutions rest on the idea that the salt does not need to be secret (grok it). However, a second column introduces a complexity that might not be a luxury if you have a legacy system with one (1) column for the password, and the cost of updating the table and the code could be too high. But it is storing the password hashed with a random salt in a single DB column that I need to understand better, with practical application. I like this solution for a couple of reasons: a salt is expected, and it respects legacy boundaries. Here's where I get lost: if the salt is random and hashed with the password, how can the system ever match the password? I have a theory on this, and as I type I might be grokking the concept: given a random salt of 128 bytes and a password of 8 bytes ('foobar12'), it could be programmatically possible to remove the part of the hash that was the salt, by hashing a random 128-byte salt and getting the substring of the original hash that is the hashed password. Then re-hashing to match using the hash algorithm...??? So... any takers on helping. :) Am I close?
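    To make the matching step concrete, here is a minimal sketch of the usual single-column scheme (not from the question; SHA-256 and the "salt$hash" hex format are illustrative choices): the salt is not secret, so it is stored in the clear next to the hash in the same column, and verification simply re-derives the hash from the stored salt plus the candidate password.

        import java.security.MessageDigest;
        import java.security.SecureRandom;

        // Minimal sketch of storing "hex(salt)$hex(sha256(salt || password))" in one
        // column. Algorithm, salt length and separator are illustrative assumptions.
        public class SaltedHash {
            private static final SecureRandom RNG = new SecureRandom();

            static String create(String password) throws Exception {
                byte[] salt = new byte[16];
                RNG.nextBytes(salt);
                return toHex(salt) + "$" + toHex(digest(salt, password));
            }

            static boolean verify(String password, String stored) throws Exception {
                String[] parts = stored.split("\\$");      // [salt, hash]
                byte[] salt = fromHex(parts[0]);
                return parts[1].equals(toHex(digest(salt, password)));
            }

            private static byte[] digest(byte[] salt, String password) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                md.update(salt);
                md.update(password.getBytes("UTF-8"));
                return md.digest();
            }

            private static String toHex(byte[] bytes) {
                StringBuilder sb = new StringBuilder();
                for (byte b : bytes) sb.append(String.format("%02x", b));
                return sb.toString();
            }

            private static byte[] fromHex(String s) {
                byte[] out = new byte[s.length() / 2];
                for (int i = 0; i < out.length; i++)
                    out[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
                return out;
            }
        }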

  • Missing elements of collection

    - by Neir0
    I have a collection:

        ObservableCollection<string> outoverList

    And I have a function which adds to the collection:

        outoverList.Add("out:" + element.tagName);

    The function is called a few times, but sometimes the collection loses elements. We call the function, the function adds an element, the collection has 9 elements (for example), and in the next function call the collection has only 8 elements. One element is missing. Here is ReSharper's Find Usages log:

        Search target
        FindElementHandler.outoverList:ObservableCollection<string>
        Found 3 usages in solution
        <FindElementExperiments> (3 items)
            FindElementHandler.cs (3 items)
                (50,13)  outoverList = new ObservableCollection<string>();
                (94,13)  outoverList.Add("out:"+element.tagName);
                (118,13) outoverList.Add("over:" + element.tagName);

    As you can see, I just add elements to the collection everywhere; I have no code that removes elements. Maybe I did something wrong; you can look at the screen capture: http://www.youtube.com/watch?v=Ei6dQnHCMIc I am a newbie and often encounter various problems, but this bug looks mystical to me. P.S. Sorry for my English.

  • PyQt Drag and Drop - Nothing happens

    - by Umang
    Hi, I'm trying to drop a file onto a window (I've tried the same thing with a QListWidget, without success there too). test.py:

        #! /usr/bin/python
        # Test
        from PyQt4 import QtCore, QtGui
        import sys
        from qt_test import Ui_MainWindow

        class MyForm(QtGui.QMainWindow, Ui_MainWindow):
            def __init__(self, parent=None):
                QtGui.QWidget.__init__(self, parent)
                self.setupUi(self)
                self.__class__.dragEnterEvent = self.DragEnterEvent
                self.__class__.dragMoveEvent = self.DragEnterEvent
                self.__class__.dropEvent = self.drop
                self.setAcceptDrops(True)
                print "Initialized"
                self.show()

            def DragEnterEvent(self, event):
                event.accept()

            def drop(self, event):
                link = event.mimeData().text()
                print link

        def main():
            app = QtGui.QApplication(sys.argv)
            mw = MyForm()
            sys.exit(app.exec_())

        if __name__ == "__main__":
            main()

    And here's qt_test.py:

        # -*- coding: utf-8 -*-
        # Form implementation generated from reading ui file 'untitled.ui'
        #
        # Created: Thu May 20 12:23:19 2010
        #      by: PyQt4 UI code generator 4.6
        #
        # WARNING! All changes made in this file will be lost!

        from PyQt4 import QtCore, QtGui

        class Ui_MainWindow(object):
            def setupUi(self, MainWindow):
                MainWindow.setObjectName("MainWindow")
                MainWindow.resize(800, 600)
                MainWindow.setAcceptDrops(True)
                self.centralwidget = QtGui.QWidget(MainWindow)
                self.centralwidget.setObjectName("centralwidget")
                MainWindow.setCentralWidget(self.centralwidget)
                self.retranslateUi(MainWindow)
                QtCore.QMetaObject.connectSlotsByName(MainWindow)

            def retranslateUi(self, MainWindow):
                MainWindow.setWindowTitle(QtGui.QApplication.translate("MainWindow", "MainWindow", None, QtGui.QApplication.UnicodeUTF8))

    I've read this email and I've followed everything said there. I still don't get any output except "Initialized", and the drag doesn't seem to get accepted (both for files from a file manager and plain text dragged from a text editor). Do you know what I'm doing wrong? Thanks!

  • jQuery date picker not persistent after AJAX

    - by ILMV
    So I'm using the jQuery date picker, and it works well. I am using AJAX to go and get some content; obviously when this new content is applied the bind is lost. I learnt about this last week and discovered the .live() method. But how do I apply that to my date picker? Because this isn't an event, .live() won't be able to help... right? This is the code I'm using to bind the date picker to my input:

        $(".datefield").datepicker({showAnim:'fadeIn',dateFormat:'dd/mm/yy',changeMonth:true,changeYear:true});

    I do not want to call this method every time my AJAX fires, as I want to keep that as generic as possible. Cheers :-)

    EDIT: As @nick requested, below is my wrapper function for the ajax() method:

        var ajax_count = 0;
        function getElementContents(options) {
            if(options.type===null) { options.type="GET"; }
            if(options.data===null) { options.data={}; }
            if(options.url===null) { options.url='/'; }
            if(options.cache===null) { options.cace=false; }
            if(options.highlight===null || options.highlight===true) {
                options.highlight=true;
            } else {
                options.highlight=false;
            }
            $.ajax({
                type: options.type,
                url: options.url,
                data: options.data,
                beforeSend: function() {
                    /* if this is the first ajax call, block the screen */
                    if(++ajax_count==1) { $.blockUI({message:'Loading data, please wait'}); }
                },
                success: function(responseText) {
                    /* we want to perform different methods of assignment depending on the element type */
                    if($(options.target).is("input")) {
                        $(options.target).val(responseText);
                    } else {
                        $(options.target).html(responseText);
                    }
                    /* fire change, fire highlight effect... only id highlight==true */
                    if(options.highlight===true) {
                        $(options.target).trigger("change").effect("highlight",{},2000);
                    }
                },
                complete: function () {
                    /* if all ajax requests have completed, unblock screen */
                    if(--ajax_count===0) { $.unblockUI(); }
                },
                cache: options.cache,
                dataType: "html"
            });
        }

    What about this solution: I have a rules.js which includes all my initial bindings with the elements. If I were to put these in a function, then call that function in the success callback of the ajax method, that way I wouldn't be repeating code... Hmmm, thoughts please :D

  • Troubleshooting failover cluster problem in W2K8 / SQL05

    - by paulland
    I have an active/passive W2K8 (64) cluster pair, running SQL05 Standard. Shared storage is on an HP EVA SAN (FC). I recently expanded the filesystem on the active node for a database, adding a drive designation. The shared storage drives are designated as F:, I:, J:, L: and X:, with SQL filesystems on the first 4 and X: used as a backup destination. Last night, as part of a validation process (the passive node had been offline for maintenance), I moved the SQL instance to the other cluster node. The database in question immediately moved to Suspect status. Review of the system logs showed that the database would not load because the file "K:\SQLDATA\whatever.ndf" could not be found. (Note that we do not have a K: drive designation.) A review of the J: storage drive showed zero contents -- nothing -- this is where "whatever.ndf" should have been. Hmm, I thought. Problem with the server. I'll just move SQL back to the other server and figure out what's wrong... Still no database. Suspect. Uh-oh. "Whatever.ndf" had gone into the bit bucket. I finally decided to just restore from the backup (which had been taken immediately before the validation test), so nothing was lost but a few hours of sleep. The questions: (1) Why did the passive node think the whatever.ndf files were supposed to go to drive K:, when this drive didn't exist as a resource on the active node? (2) How can I get the cluster nodes re-synced so that failover can be accomplished? I don't know that there wasn't a K: drive as a cluster resource at some time in the past, but I do know that this drive did not exist on the original cluster at the time of the resource move.

  • Webstart omits cookie, resulting in EOFException in ObjectInputStream when accessing Servlets?!

    - by Houtman
    Hi, my app is started both from the command line and by using a JNLP file. I'm running Java version 1.6.0_14. First I had the problem that I created the buffered input and output streams in the incorrect order. Found the solution here at StackOverflow. So starting from the command line works fine now. But when starting the app using Webstart, it ends here:

        java.io.EOFException
            at java.io.ObjectInputStream$PeekInputStream.readFully(Unknown Source)
            at java.io.ObjectInputStream$BlockDataInputStream.readShort(Unknown Source)
            at java.io.ObjectInputStream.readStreamHeader(Unknown Source)
            at java.io.ObjectInputStream.<init>(Unknown Source)
            at <..>remoting.thinclient.RemoteSocketChannel.<init>(RemoteSocketChannel.java:76)

    I found some posts regarding similar problems:

        - at ibm.com - identifies a cookies problem
        - at bugs.sun.com - identifies the problem as solved in 6u10(b12)?

    The first suggests that there is a problem in Webstart with cookies. It doesn't seem to be acknowledged as a proper Java bug though. Still I am a bit lost in the solution provided regarding the cookies (ibm link). Can anyone expand on the cookie solution? I can't find information on how the cookie is generated in the first place. Many thanks.

  • Define a SWIG interface file to generate wrappers for every type in a header file

    - by Dmitriy Matveev
    Hi! We're using a C library in our Java project. Several years ago some other developer, who has since retired (as always), created all the wrappers for us. The wrappers were generated by SWIG, but the interface file is lost now. The basic idea of the library and its wrappers is the following: there is only one function, which returns a pointer to some complex object, and there is a wrapper for that function. The complex object is a tree-like structure with dozens of node kinds and types (C structures) used to represent them. There are hundreds of wrappers for every field of every type, and we're trying to use them all. The library was updated some time ago and now there is some new data we are not yet aware of, but would like to use. This data is contained in some of the objects indirectly contained in or referenced from the object created by the function we call (some new fields and types were added). I know that I shouldn't make any changes to the wrappers by hand and should rather modify the interface, but as I already wrote, it's missing. For now I only want to generate wrappers for the few types which were added or changed and add them to our old wrappers, but later I want to start creating an interface file which will define "what and how should be wrapped". All the definitions necessary for us are defined in a single header file. Is it possible to tell SWIG to generate wrappers for every type in this header? If so, how can I write such an interface file?

  • use exec for dsadd

    - by Daryl Gill
    I'm programming on a Windows Server 2008 and I wish to have a web UI to interact with the domain's Active Directory. One of my main problems is that I'm calling dsadd from an HTML form, but it is not succeeding. I know my command is correct; I have tested it on the server's command line. My code is below:

        if (isset($_POST['Submit'])) {
            $DesiredUsername = $_POST['DesiredUsername'];
            $DesiredPassword = $_POST['DesiredPassword'];

            $DU = "{$DesiredUsername}";  // Desired Username
            $OU = "PHPCreatedUsers";     // Domain OU
            $DC1 = "slayerserv";         // Domain Part one
            $DC2 = "local";              // Domain Part Two
            $PWD = "{$DesiredPassword}"; // Password

            $ExecScript = 'dsadd user cn=$DesiredUsername,cn=PHPCreatedUsers,dc=slayerserv,dc=local -disabled no -pwd $DesiredPassword -mustchpwd yes';
            exec($ExecScript, $output);

            mysql_query("INSERT INTO addedusers (`ID`, `DU`, `OU`, `DC1`, `DC2, `PWD`) VALUES ('', '$DU', '$OU', '$DC1', '$DC2', '$PWD')");

            echo "<br><br>";
            print_r($output);
            # echo "User: $DesiredUsername Has been Created";
        }

    When I print_r($output); it returns a blank array: Array ( ). Could anyone provide me with a solution or point me in the right direction? Below is a working example of my usage of exec:

        $Script = 'ping 127.0.0.1 -n 1';
        exec($Script, $Output);
        print_r($Output);

    print_r($Output); gives:

        Array
        (
            [0] =>
            [1] => Pinging 127.0.0.1 with 32 bytes of data:
            [2] => Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
            [3] =>
            [4] => Ping statistics for 127.0.0.1:
            [5] => Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
            [6] => Approximate round trip times in milli-seconds:
            [7] => Minimum = 0ms, Maximum = 0ms, Average = 0ms
        )

  • Oracle: Insertion on an indexed table, avoiding duplicates. Looking for tips and advice.

    - by Tom
    Hi everyone, I'm looking for the best solution (performance-wise) to achieve this. I have to insert records into a table, avoiding duplicates. For example, take table A:

        INSERT INTO A (
            SELECT DISTINCT [FIELDS] FROM B, C, D ..
            WHERE (JOIN CONDITIONS ON B, C, D ..)
            AND NOT EXISTS (
                SELECT * FROM A ATMP WHERE ATMP.SOMEKEY = A.SOMEKEY
            )
        );

    I have an index over A.SOMEKEY, just to optimize the NOT EXISTS query, but I realize that inserting into an indexed table will be a performance hit. So I was thinking of duplicating table A in a global temporary table, where I would keep the index, then removing the index from table A and executing the query, but modified:

        INSERT INTO A (
            SELECT DISTINCT [FIELDS] FROM B, C, D ..
            WHERE (JOIN CONDITIONS ON B, C, D ..)
            AND NOT EXISTS (
                SELECT * FROM GLOBAL_TEMPORARY_TABLE_A ATMP
                WHERE ATMP.SOMEKEY = A.SOMEKEY
            )
        );

    This would solve the "inserting into an indexed table" problem, but I would have to update the global temporary table A with each insertion I make. I'm kind of lost here. Is there a better way to achieve this? Thanks in advance,

  • How can I do rapid application development with ASP.NET MVC?

    - by Erik Forbes
    I've been given a short amount of time (~80 hours to start with) to replace an existing Access database with a full-blown SQL + Web system, and I'm enumerating my options. I would like to use ASP.NET MVC, but I'm unsure how to use it effectively with my short timetable. For the database backend I'll be using LINQ to SQL, as it's a product I already know and can get something working with quickly. Does anyone have experience with using ASP.NET MVC in this way and can share some insight?

    Edit: The reason I've been interested in ASP.NET MVC is because I know (100% confirmed) that there will be more work to do after this first round, and I'd like my maintenance work to be as easy as possible. In my experience WebForms applications tend to break down over repeated maintenance, despite discipline. Maybe there's a middle ground? How difficult would it be for me to, say, build the app with WebForms, then migrate it to MVC later when I have more time budgeted to the project?

    Edit 2: Further background: the Access application I'm replacing is used in some capacity by everyone in the building, and since it was upgraded from Access 98 to 2003 it's been crashing daily, causing hours of lost productivity as people have to re-enter data since the last backup. This is the reason for the short amount of time - this is a critical business function, and they can't afford to keep re-entering data on a daily basis.

  • Calling .ajax() from an event handler in C# ASP.NET

    - by ibininja
    Good day! In the code-behind (upload.aspx) I have an event that returns the number of bytes being streamed; as I debug it, it works fine. I wanted to reflect the numbers returned from the event handler on a progress bar, and this is where I got lost. I tried using jQuery's .ajax() function. This is how I implemented it: in the event handler in my code-behind I added this code to call the .ajax() function:

        Page.ClientScript.RegisterStartupScript(this.GetType(), "UpdateProgress",
            "<script type='text/javascript'>updateProgress();</script>");

    My plan is that whenever the event handler changes the value of bytes being streamed, it calls the JavaScript function updateProgress(). The .ajax() function updateProgress() is:

        function updateProgress() {
            $.ajax({
                type: "POST",
                url: "upload.aspx/GetData",
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                async: true,
                success: function (msg) {
                    $("#progressbar").progressbar("option", "value", msg.d);
                }
            });
        }

    I made sure that the function GetData() is a [System.Web.Services.WebMethod] and that it is static as well. So the workflow of what I am trying to implement is:

        - Click on the Upload button
        - The code-behind starts executing and the EventHandler triggers
        - The EventHandler calls the .ajax() function
        - The .ajax() function retrieves the bytes being streamed and updates the progress bar

    When I ran the code, all runs well except that the .ajax() call is only executed when the upload is finished (and the progress bar also updates only when the upload has finished), even though I call the .ajax() function every time in the event handler, as reflected above... What am I doing wrong? Am I thinking of this right? Is there anything else I should add, maybe an UpdatePanel or something? Thank you.

  • Misalignment in the output Bitmap created from a byte array

    - by Daniel
    I am trying to understand why I am having trouble creating a Bitmap from a byte array. I post this after careful scrutiny of the existing posts about Bitmap creation from byte arrays, like the following: Creating a bitmap from a byte[], Working with Image and Bitmap in c#?, C#: Bitmap Creation using bytes array. My code is meant to execute a filter on an 8bppIndexed digital image, writing the pixel values to a byte[] buffer which is then converted back (after some processing to manage gray levels) into an 8bppIndexed Bitmap. My input image is a trivial image created by means of specific Perl code: https://www.box.com/shared/zqt46c4pcvmxhc92i7ct Of course, after executing the filter the output image has lost the first and last rows and the first and last columns, due to the way the filter manages borders, so from the original 256 x 256 image I get a 254 x 254 image. Just to stay focused on the issue, I have commented out the code responsible for executing the filter, so that the operation really performed is an obvious:

        ComputedPixel = InputImage.GetPixel(myColumn, myRow).R;

    I know I should use lock and unlock, but I prefer to deal with one headache at a time. Anyway, this code should be a sort of identity transform, and at last I use:

        private unsafe void FillOutputImage()
        {
            OutputImage = new Bitmap(OutputImageCols, OutputImageRows, PixelFormat.Format8bppIndexed);
            ColorPalette ncp = OutputImage.Palette;
            for (int i = 0; i < 256; i++)
                ncp.Entries[i] = Color.FromArgb(255, i, i, i);
            OutputImage.Palette = ncp;
            Rectangle area = new Rectangle(0, 0, OutputImageCols, OutputImageRows);
            var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
            Marshal.Copy(byteBuffer, 0, data.Scan0, byteBuffer.Length);
            OutputImage.UnlockBits(data);
        }

    The output image I get is the following: https://www.box.com/shared/p6tubyi6dsf7cyregg9e It is quite clear that I am losing a pixel per row, but I cannot understand why. I have carefully checked all the parameters: OutputImageCols, OutputImageRows and the byte[] byteBuffer length and content, even writing known values as a way to test. The code is nearly identical to other code posted on StackOverflow and elsewhere. Could someone help identify where the problem is? Thanks a lot.

  • Capture data read from file into string stream Java

    - by halluc1nati0n
    I'm coming from a C++ background, so be kind on my n00bish queries... I'd like to read data from an input file and store it in a stringstream. I can accomplish this easily in C++ using stringstreams. I'm a bit lost trying to do the same in Java. Following is the crude code I've developed, where I'm storing the data read line-by-line in a string array. I need to use a string stream to capture my data (rather than use a string array). Any help?

        char dataCharArray[] = new char[2];
        int marker = 0;
        String inputLine;
        String temp_to_write_data[] = new String[100];

        // Now, read from output_x into stringstream
        FileInputStream fstream = new FileInputStream("output_" + dataCharArray[0]);

        // Convert our input stream to a BufferedReader
        BufferedReader in = new BufferedReader(new InputStreamReader(fstream));

        // Continue to read lines while there are still some left to read
        while ((inputLine = in.readLine()) != null) {
            // Print file line to screen
            // System.out.println(inputLine);
            temp_to_write_data[marker] = inputLine;
            marker++;
        }
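    For comparison, a minimal sketch of the closest Java stand-in for a C++ stringstream, a StringBuilder that accumulates the whole file (the file name is just an example, not taken from the code above):

        import java.io.BufferedReader;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStreamReader;

        // Sketch: read a file line by line into a StringBuilder instead of a
        // fixed-size String[] array.
        public class ReadIntoBuffer {
            public static void main(String[] args) throws IOException {
                StringBuilder buffer = new StringBuilder();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(new FileInputStream("output_a")));
                try {
                    String line;
                    while ((line = in.readLine()) != null) {
                        buffer.append(line).append('\n');  // keep the line breaks
                    }
                } finally {
                    in.close();
                }
                System.out.println(buffer.toString());
            }
        }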

  • Dependency between operations in scala actors

    - by paradigmatic
    I am trying to parallelise some code using Scala actors. This is my first real code with actors, but I have some experience with Java multithreading and MPI in C. However, I am completely lost. The workflow I want to realise is a circular pipeline and can be described as follows:

        - Each worker actor has a reference to another one, thus forming a circle.
        - There is a coordinator actor which can trigger a computation by sending a StartWork() message.
        - When a worker receives a StartWork() message, it processes some stuff locally and sends a DoWork(...) message to its neighbour in the circle.
        - The neighbour does some other stuff and sends in turn a DoWork(...) message to its own neighbour. This continues until the initial worker receives a DoWork() message.
        - The coordinator can send a GetResult() message to the initial worker and wait for a reply. The point is that the coordinator should only receive a result when the data is ready.

    How can a worker wait until the job has come back around to it before answering the GetResult() message? To speed up computation, any worker can receive a StartWork() at any time. Here is my first-try pseudo-implementation of the worker:

        class Worker(neighbor: Worker, numWorkers: Int) {
            var ready = Foo()

            def act() {
                case StartWork() => {
                    val someData = doStuff()
                    neighbor ! DoWork(someData, numWorkers - 1)
                }
                case DoWork(resultData, remaining) =>
                    if (remaining == 0) {
                        ready = resultData
                    } else {
                        val someOtherData = doOtherStuff(resultData)
                        neighbor ! DoWork(someOtherData, remaining - 1)
                    }
                case GetResult() => reply(ready)
            }
        }

    On the coordinator side:

        worker ! StartWork()
        val result = worker !? GetResult() // should wait

  • Using authsmtp from a Grails server

    - by Simon
    This is quite a specific question, and I have had no luck on the Grails Nabble forum, so I thought I would post here. I am using the Grails mail plug-in, but I think my question is a general one about using AuthSMTP as an email gateway from my server. I am having trouble sending mail from my app using AuthSMTP. I have installed and configured the mail plugin and was originally using my ISP's SMTP server to send mail. However, when I deployed to AWS EC2 this failed because my elastic IP was blocked by the SMTP host. So I bought myself an AuthSMTP account and set up my server's email address as an accepted one at AuthSMTP. I then changed my configuration in SecurityConfig.groovy to point to the AuthSMTP server I had been designated:

        mailHost = "mail.authsmtp.com"
        mailUsername = "myusername"
        mailPassword = "mypassword"
        mailProtocol = "smtp"
        mailFrom = "[email protected]"
        mailPort = 2525

    I'm just trying to get this to work locally before I deploy back up to AWS. Sending mail fails, and in my log I have this exception:

        2010-02-13 10:59:44,218 [http-8080-1] ERROR service.EmailerService - Failed to send emails: Failed messages: com.sun.mail.smtp.SMTPSendFailedException: 513 5.0.0 Your email system must authenticate before sending mail.
        org.springframework.mail.MailSendException; nested exception details (1) are:
        Failed message 1:
        com.sun.mail.smtp.SMTPSendFailedException: 513 5.0.0 Your email system must authenticate before sending mail.
            at com.sun.mail.smtp.SMTPTransport.issueSendCommand(SMTPTransport.java:1388)
            at com.sun.mail.smtp.SMTPTransport.mailFrom(SMTPTransport.java:959)
            at com.sun.mail.smtp.SMTPTransport.sendMessage(SMTPTransport.java:583)

    I'm a bit lost, since the username and password I provide in the configuration are definitely correct. A terse and not very helpful conversation with AuthSMTP support suggests that I need to MD5 and/or base64 encode my credentials before sending, so my question is in three parts:

        1) Any idea what's going on with the failure and why that message is appearing?
        2) How would I encode the credentials to pass to AuthSMTP, and how would I configure that for the mail plugin?
        3) Has anyone successfully connected and sent mail through AuthSMTP from the mail plugin, and specifically from AWS EC2?

  • Matlab and MrVista

    - by AnnaRaven
    I'm new to MATLAB and mrVista. I'm running MATLAB Version 7.8.0.347 (R2009a), 32-bit (win32), from February 12, 2009. The OS is Windows 7 Professional. I downloaded the most recent MrVista_hourly.zip and extracted it into my C:\Program_Files_(x86)\MATLAB directory. I think I need to run mrvInstall, but when I do, I get the following:

        EDU>> mrvInstall
        Checking VISATSOFT installation.
        Windows, 32-bit, installation
        Checking and possibly installing .NET framework.
        This can take several minutes
        Checking for visualization library (.dll) files.
        You are missing msvcp70.dll.

    So, I'm completely lost at this point. Do I just need to download msvcp70.dll from the net? If so, is there a safe place to download it from? If there's some other way I'm supposed to get mrVista to work from MATLAB, instead of mrvInstall, please let me know. Thanks in advance for your help.

    EDIT: I've downloaded and installed the dll and it still isn't working. I'll go ask on Super User. Thanks for trying to help anyway.

  • Bugzilla Install question - I'm stuck

    - by Nabeel
    I run Bugzilla's checksetup.pl (migrating from an older version), and it always returns:

        Reading ./localconfig...
        Checking for DBD-mysql (v4.00)   ok: found v4.005
        Had to create DBD::mysql::dr::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225.
        Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225.
        Had to create DBD::mysql::db::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225.
        Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225.
        There was an error connecting to MySQL:
            Undefined subroutine &DBD::mysql::db::_login called at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBD/mysql.pm line 142, <DATA> line 225.

    MySQL version:

        [root@bugzilla-core TMP]# mysql --version
        mysql  Ver 14.12 Distrib 5.0.60sp1, for redhat-linux-gnu (x86_64) using readline 5.1

    And mysql_config:

        [root@bugzilla-core TMP]# mysql_config
        Usage: /data01/mysql-5.0.60/bin/mysql_config [OPTIONS]
        Options:
            --cflags         [-I/data01/mysql-5.0.60/include -g]
            --include        [-I/data01/mysql-5.0.60/include]
            --libs           [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient -lz -lcrypt -lnsl -lm -lmygcc]
            --libs_r         [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient_r -lz -lpthread -lcrypt -lnsl -lm -lpthread -lmygcc]
            --socket         [/tmp/mysql.sock]
            --port           [0]
            --version        [5.0.60sp1]
            --libmysqld-libs [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqld -lz -lpthread -lcrypt -lnsl -lm -lpthread -lrt -lmygcc]

    Now, I've tried the latest version of DBD-mysql (4.0.14). I'm completely lost and stumped; I'm not sure where to go from here. Scouring the webs hasn't returned anything fruitful. Any ideas?

  • How to TDD Asynchronous Events?

    - by Padu Merloti
    The fundamental question is: how do I create a unit test that needs to call a method, wait for an event to happen on the tested class, and then call another method (the one that we actually want to test)? Here's the scenario, if you have time to read further: I'm developing an application that has to control a piece of hardware. In order to avoid depending on hardware availability, when I create my object I specify that we are running in test mode. When that happens, the class being tested creates the appropriate driver hierarchy (in this case a thin mock layer of hardware drivers). Imagine that the class in question is an Elevator and I want to test the method that gives me the floor number the elevator is on. Here is how my fictitious test looks right now:

        [TestMethod]
        public void TestGetCurrentFloor()
        {
            var elevator = new Elevator(Elevator.Environment.Offline);
            elevator.ElevatorArrivedOnFloor += TestElevatorArrived;
            elevator.GoToFloor(5);

            //Here's where I'm getting lost... I could block
            //until TestElevatorArrived gives me a signal, but
            //I'm not sure it's the best way
            int floor = elevator.GetCurrentFloor();
            Assert.AreEqual(floor, 5);
        }

    Edit: Thanks for all the answers. This is how I ended up implementing it:

        [TestMethod]
        public void TestGetCurrentFloor()
        {
            var elevator = new Elevator(Elevator.Environment.Offline);
            elevator.ElevatorArrivedOnFloor += (s, e) => { Monitor.Pulse(this); };
            lock (this)
            {
                elevator.GoToFloor(5);
                if (!Monitor.Wait(this, Timeout))
                    Assert.Fail("Elevator did not reach destination in time");
                int floor = elevator.GetCurrentFloor();
                Assert.AreEqual(floor, 5);
            }
        }

  • At what line in the following code should I be committing my UnitOfWork?

    - by Pure.Krome
    Hi folks, I have the following code, which is in a transaction. I'm not sure where/when I should be committing my unit of work. If someone knows where, can they please explain WHY they chose that spot? (I'm trying to understand the pattern through examples, as opposed to just getting my code to work.) Here's what I've got:

        using (TransactionScope transactionScope = new TransactionScope(TransactionScopeOption.RequiresNew,
            new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
        {
            _logEntryRepository.InsertOrUpdate(logEntry);
            //_unitOfWork.Commit(); // Here, commit #1 ?

            // Now, if this log entry was a NewConnection or a LostConnection, then we need to make sure we update the ConnectedClients.
            if (logEntry.EventType == EventType.NewConnection)
            {
                _connectedClientRepository.Insert(new ConnectedClient { LogEntryId = logEntry.LogEntryId });
                //_unitOfWork.Commit(); // Here, commit #2 ?
            }

            // A (PB) BanKick does _NOT_ register a lost connection .. so we need to make sure we handle those scenarios as a LostConnection.
            if (logEntry.EventType == EventType.LostConnection || logEntry.EventType == EventType.BanKick)
            {
                _connectedClientRepository.Delete(logEntry.ClientName, logEntry.ClientIpAndPort);
                //_unitOfWork.Commit(); // Here, commit #3 ?
            }

            _unitOfWork.Commit(); // Here, commit #4 ?
            transactionScope.Complete();
        }

    Cheers :)

  • How to add validation errors to the validation collection in ASP.NET MVC?

    - by johndoe
    Inside my controller's action I have the following code:

        public ActionResult GridAction(string id)
        {
            if (String.IsNullOrEmpty(id))
            {
                // add errors to the errors collection and then return the view saying that
                // you cannot select the dropdownlist value with the "Please Select" option
            }
            return View();

    UPDATE:

        if (String.IsNullOrEmpty(id))
        {
            // add error
            ModelState.AddModelError("GridActionDropDownList", "Please select an option");
            return RedirectToAction("Orders");
        }
        }

    UPDATE 2: Here is my updated code:

        @Html.DropDownListFor(x => x.SelectedGridAction, Model.GridActions, "Please Select")
        @Html.ValidationMessageFor(x => x.SelectedGridAction)

    The Model looks like the following:

        public class MyInvoicesViewModel
        {
            private List<SelectListItem> _gridActions;

            public int CurrentGridAction { get; set; }

            [Required(ErrorMessage = "Please select an option")]
            public string SelectedGridAction { get; set; }

            public List<SelectListItem> GridActions
            {
                get
                {
                    _gridActions = new List<SelectListItem>();
                    _gridActions.Add(new SelectListItem() { Text = "Export to Excel", Value = "1" });
                    return _gridActions;
                }
            }
        }

    And here is my controller action:

        public ActionResult GridAction(string id)
        {
            if (String.IsNullOrEmpty(id))
            {
                // add error
                ModelState.AddModelError("SelectedGridAction", "Please select an option");
                return RedirectToAction("Orders");
            }
            return View();
        }

    Nothing happens! I am totally lost on this one!

    UPDATE 3: I am now using the following code but still the validation is not firing:

        public ActionResult GridAction(string id)
        {
            var myViewModel = new MyViewModel();
            myViewModel.SelectedGridAction = id; // id is passed as null
            if (!ModelState.IsValid)
            {
                return View("Orders");
            }
