Search Results

Search found 58486 results on 2340 pages for 'data integrator'.


  • Using multiple sockets, is non-blocking or blocking with select better?

    - by JPhi1618
    Let's say I have a server program that can accept connections from 10 (or more) different clients. The clients send data at random, which is received by the server, but it is certain that at least one client will be sending data every update. The server cannot wait for information to arrive because it has other processing to do. Aside from using asynchronous sockets, I see two options:

    1. Make all sockets non-blocking. In a loop, call recv on each socket and allow it to fail with WSAEWOULDBLOCK if there is no data available; if I happen to get some data, keep it.
    2. Leave the sockets as blocking. Add all sockets to an fd_set and call select(). If the return value is non-zero (which it will be most of the time), loop through all the sockets to find the appropriate number of readable sockets with FD_ISSET() and only call recv on the readable sockets.

    The first option will create a lot more calls to the recv function. The second method is a bigger pain from a programming perspective because of all the FD_SET and FD_ISSET looping. Which method (or another method) is preferred? Is avoiding the overhead of letting recv fail on a non-blocking socket worth the hassle of calling select()? I think I understand both methods and I have tried both with success, but I don't know if one way is considered better or optimal. Only knowledgeable replies please!
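
    The question is about Winsock in C; purely as an illustration of the second option, here is a minimal sketch of a select()-based loop using Python's standard socket and select modules (the port and timeout are arbitrary choices for the example):

      import select
      import socket

      # Listening socket plus however many client sockets are currently connected.
      server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      server.bind(("0.0.0.0", 9000))   # port chosen arbitrarily for the example
      server.listen()

      sockets = [server]

      while True:
          # Wait up to 50 ms for any socket to become readable, then get on
          # with the server's other processing.
          readable, _, _ = select.select(sockets, [], [], 0.05)
          for s in readable:
              if s is server:
                  conn, _ = s.accept()
                  sockets.append(conn)
              else:
                  data = s.recv(4096)      # will not block: select said it is readable
                  if not data:             # client closed the connection
                      sockets.remove(s)
                      s.close()
                  else:
                      pass                 # handle the received data here
          # ... other per-update processing goes here ...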

    Read the article

  • MVC Paging and Sorting Patterns: How to Page or Sort Re-Using Form Criteria

    - by CRice
    What is the best ASP.NET MVC pattern for paging data when the data is filtered by form criteria? This question is similar to http://stackoverflow.com/questions/1425000/preserve-data-in-net-mvc but surely there is a better answer? Currently, when I click the search button this action is called:

      [AcceptVerbs(HttpVerbs.Post)]
      public ActionResult Search(MemberSearchForm formSp, int? pageIndex, string sortExpression) {}

    That is perfect for the initial display of the results in the table. But I want to have page number links or sort expression links re-post the current form data (the user entered it the first time - persisted because it is returned as viewdata), along with extra route params 'pageIndex' or 'sortExpression'. Can an ActionLink or RouteLink (which I would use for page numbers) post the form to the url they specify?

      <%= Html.RouteLink("page 2", "MemberSearch", new { pageIndex = 1 })%>

    At the moment they just do a basic redirect and do not post the form values, so the search page loads fresh. In regular old web forms I used to persist the search params (MemberSearchForm) in the ViewState and have a GridView paging or sorting event reuse it.
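
    One common answer is to issue the search as a GET so the criteria live in the query string, and then build page/sort links that copy the current criteria and only override pageIndex or sortExpression. Purely as an illustration of that URL-building step (not of ASP.NET MVC itself), here is a sketch using Python's urllib.parse; the URL and parameter names are made up for the example:

      from urllib.parse import urlencode, urlparse, parse_qs, urlunparse

      def page_link(current_url, page_index):
          """Rebuild the current search URL with only pageIndex overridden,
          keeping every other filter/sort parameter intact."""
          parts = urlparse(current_url)
          params = {k: v[0] for k, v in parse_qs(parts.query).items()}
          params["pageIndex"] = page_index
          return urlunparse(parts._replace(query=urlencode(params)))

      url = "/Member/Search?name=smith&city=london&sortExpression=name&pageIndex=0"
      print(page_link(url, 1))
      # /Member/Search?name=smith&city=london&sortExpression=name&pageIndex=1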

    Read the article

  • (Excel) VBA spin button that steps through date-time values in an SQL database

    - by Gulredy
    I have an SQL database table in MySQL which has lots of rows with varied date-time values. For example:

      2012-08-21 10:10:00  <-- around 12 rows with this date
      2012-08-21 15:31:00  <-- around 5 rows with this date
      2012-08-22 11:40:00  <-- around 10 rows with this date
      2012-08-22 12:17:00  <-- around 9 rows with this date
      2012-08-22 12:18:00  <-- around 7 rows with this date
      2012-08-25 07:21:00  <-- around 6 rows with this date

    If the user clicks on the SpinButton1_SpinUp() or SpinButton1_SpinDown() button, it should do the following. SpinButton1_SpinUp() should filter out the data from the SQL table with the date-time that comes right after the one we are currently on. Example: we have currently selected 2012-08-21 15:31:00. The user hits the SpinUp button, and the program selects the rows whose date is the next higher value, in this case 2012-08-22 11:40:00. So when the user hits the SpinUp button, the data selected in the database changes from the rows with date 2012-08-21 15:31:00 to the rows with date 2012-08-22 11:40:00. SpinButton1_SpinDown() does exactly the reverse of the SpinUp button: when the user hits it, the selection changes from the rows with date 2012-08-21 15:31:00 to the rows with date 2012-08-21 10:10:00.

    So I think the date we are currently on should be stored in a variable. But on a button hit, not every higher or lower date should be selected - only the closest higher or the closest lower date. How can I do this? I hope I described my problem understandably. My native language is not English, so misunderstandings can occur - please ask if you don't understand something! Thank you for reading!
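
    The usual SQL trick for "closest next / closest previous" is MIN(...) with a > filter and MAX(...) with a < filter. The question is about Excel VBA against MySQL; purely to illustrate the queries, here is a sketch against an in-memory SQLite database from Python (the table and column names are invented for the example):

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE readings (made_on TEXT, value REAL)")
      conn.executemany("INSERT INTO readings VALUES (?, ?)", [
          ("2012-08-21 10:10:00", 1.0),
          ("2012-08-21 15:31:00", 2.0),
          ("2012-08-22 11:40:00", 3.0),
      ])

      current = "2012-08-21 15:31:00"

      # SpinUp: the closest date-time strictly after the current one.
      next_dt = conn.execute(
          "SELECT MIN(made_on) FROM readings WHERE made_on > ?", (current,)
      ).fetchone()[0]

      # SpinDown: the closest date-time strictly before the current one.
      prev_dt = conn.execute(
          "SELECT MAX(made_on) FROM readings WHERE made_on < ?", (current,)
      ).fetchone()[0]

      print(next_dt)  # 2012-08-22 11:40:00
      print(prev_dt)  # 2012-08-21 10:10:00

      # Then re-select the rows for the new current date-time:
      rows = conn.execute(
          "SELECT * FROM readings WHERE made_on = ?", (next_dt,)
      ).fetchall()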

    Read the article

  • HTTP Error 500.19 Internal Server Error

    - by Attilah
    I created and deployed a pretty simple WCF service, but when accessing it from IE I get this:

      HTTP Error 500.19 - Internal Server Error
      Description: The requested page cannot be accessed because the related configuration data for the page is invalid.
      Error Code: 0x80070005
      Notification: BeginRequest
      Module: IIS Web Core
      Requested URL: http://localhost:80/ProductsService/ProductsService.svc
      Physical Path: C:\Users\Administrator\Documents\Visual Studio 2008\Projects\ProductsService\ProductsService\ProductsService.svc
      Logon User: Not yet determined
      Logon Method: Not yet determined
      Handler: Not yet determined
      Config Error: Cannot read configuration file
      Config File: \\?\C:\Users\Administrator\Documents\Visual Studio 2008\Projects\ProductsService\ProductsService\web.config
      Config Source: -1: 0:

    More Information: This error occurs when there is a problem reading the configuration file for the Web server or Web application. In some cases, the event logs may contain more information about what caused this error.

    Here is the content of my web.config file:

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <configSections>
          <section name="dataConfiguration"
                   type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
        </configSections>
        <dataConfiguration defaultDatabase="AdventureWorksConnection" />
        <connectionStrings>
          <add name="AdventureWorksConnection"
               connectionString="Database=AdventureWorks; Server=(localhost)\SQLEXPRESS;Integrated Security=SSPI"
               providerName="System.Data.SqlClient" />
        </connectionStrings>
        <system.serviceModel>
          <services>
            <service name="Products.ProductsService">
              <endpoint address="" binding="basicHttpBinding" contract="Products.IProductsService" />
            </service>
          </services>
        </system.serviceModel>
      </configuration>

    Read the article

  • How to pull UIImages from NSData from a socket.

    - by Jus' Wondrin'
    Hey all! I'm using AsyncSocket to move some UIImages from one device over to another. Essentially, on one device I have:

      NSMutableData *data = UIImageJPEGRepresentation(image, 0.1);
      if (isRunning) {
          [sock writeData:data withTimeout:-1 tag:0];
      }

    So a new image will be added to the socket every so often (like a webcam). Then, on the other device, I am calling:

      [listenSocket readDataWithTimeout:1 tag:0];

    which will respond with:

      - (void)onSocket:(AsyncSocket *)sock didReadData:(NSData *)data withTag:(long)tag
      {
          [responseData appendData:data];
          [listenSocket readDataWithTimeout:1 tag:0];
      }

    Essentially, what I want to be able to do is have an NSTimer going which will call @selector(PullImages):

      -(void) PullImages {
          // In here, I want to be able to pull images out of responseData. How do I do that?
          // There might not be a complete image yet, there might be multiple images,
          // there might be one and a half images! I want to parse the NSData into each existing image!
      }

    Any assistance? Thanks in advance!
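
    A stream socket has no message boundaries, so the usual fix is to frame each image: send a length prefix before the JPEG bytes and, on the receiving side, keep appending to a buffer and only peel an image off once its full length has arrived. The question is about Objective-C/AsyncSocket; purely to illustrate the framing idea, here is a sketch in Python:

      import struct

      def frame(payload):
          """Prefix the payload with its length as a 4-byte big-endian integer."""
          return struct.pack(">I", len(payload)) + payload

      def extract_images(buffer):
          """Pull every complete image out of the receive buffer (a bytearray),
          leaving any partial trailing image in place for the next call."""
          images = []
          while len(buffer) >= 4:
              (length,) = struct.unpack(">I", bytes(buffer[:4]))
              if len(buffer) < 4 + length:
                  break                      # only part of the next image has arrived so far
              images.append(bytes(buffer[4:4 + length]))
              del buffer[:4 + length]
          return images

      # Example: one framed "image" arriving split across two reads, then a second one.
      msg1 = frame(b"...jpeg bytes of image one...")
      msg2 = frame(b"...jpeg bytes of image two...")
      buf = bytearray()
      buf += msg1[:10]
      print(extract_images(buf))       # [] - nothing complete yet
      buf += msg1[10:] + msg2
      print(len(extract_images(buf)))  # 2 - both payloads can now be recovered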

    Read the article

  • Sending mail with PHP with a PDF attachment

    - by Jake
    Hi, I'm trying to send an email with the PHP mail command. I've been able to do what I've tried so far, but can't seem to get it to work with an attachment. I've looked around the web and the best code I've found led me to this:

      $fileatt_name = 'JuneFlyer.pdf';
      $fileatt_type = 'application/pdf';
      $fileatt = 'JuneFlyer.pdf';

      $file = fopen($fileatt,'rb');
      $data = fread($file,filesize($fileatt));
      $data = chunk_split(base64_encode($data));

      $MAEmail = "[email protected]";

      mail("$email_address", "$subject", "$message",
          "From: ".$MAEmail."\n".
          "MIME-Version: 1.0\n".
          "Content-type: text/html; charset=iso-8859-1".
          "--{$mime_boundary}\n" .
          "Content-Type: {$fileatt_type};\n" .
          " name=\"{$fileatt_name}\"\n" .
          "Content-Disposition: attachment;\n" .
          " filename=\"{$fileatt_name}\"\n" .
          "Content-Transfer-Encoding: base64\n\n" .
          $data. "\n\n"
      );

    There are two problems when I do this. First, the contents of the email disappear. Second, there is an error on the attachment: "Adobe Reader could not open June_flyer.pdf because it is either not a supported file type or because the file has been damaged (for example it was sent as an email attachment and wasn't correctly decoded)". Any ideas of how to deal with this? Thanks, JB
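
    The error message and the headers in that snippet suggest the MIME structure is the culprit: there is no multipart/mixed Content-Type declaring the boundary, so the HTML body and the base64 block run together. The question is about PHP's mail(); purely to show what a well-formed message with a PDF attachment looks like, here is a sketch using Python's standard email package (the addresses are placeholders, and the file is assumed to exist locally):

      import smtplib
      from email.message import EmailMessage

      msg = EmailMessage()
      msg["From"] = "sender@example.com"
      msg["To"] = "recipient@example.com"
      msg["Subject"] = "June flyer"
      msg.set_content("Please find the June flyer attached.")   # plain-text body part

      # Attach the PDF; the library writes the multipart/mixed structure,
      # the boundary, and the base64 Content-Transfer-Encoding for us.
      with open("JuneFlyer.pdf", "rb") as f:
          msg.add_attachment(f.read(),
                             maintype="application",
                             subtype="pdf",
                             filename="JuneFlyer.pdf")

      print(msg.as_string()[:500])   # inspect the generated headers and boundary

      # Sending (assumes a local SMTP server is available):
      # with smtplib.SMTP("localhost") as smtp:
      #     smtp.send_message(msg)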

    Read the article

  • When compiling, I write "gcc -g -Wall dene2 dene2.c", then gcc emits some trace

    - by gcc
    When I compile my code, I write "gcc -g -Wall dene2 dene2.c" in the console, and gcc emits some things on the screen. I haven't understood what they are and I cannot construct any meaning from them. I have searched Google but haven't found any information about what gcc is emitting. I am not asking you to examine everything below - just show me "how to catch fish". (I couldn't find a meaningful title, sorry for that.)

      dene2: In function `_start':
      /build/buildd/eglibc-2.10.1/csu/../sysdeps/i386/elf/start.S:65: multiple definition of `_start'
      /usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/crt1.o:/build/buildd/eglibc-2.10.1/csu/../sysdeps/i386/elf/start.S:65: first defined here
      dene2:(.rodata+0x0): multiple definition of `_fp_hw'
      /usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/crt1.o:(.rodata+0x0): first defined here
      dene2: In function `_fini':
      (.fini+0x0): multiple definition of `_fini'
      /usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/crti.o:(.fini+0x0): first defined here
      dene2:(.rodata+0x4): multiple definition of `_IO_stdin_used'
      /usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/crt1.o:(.rodata.cst4+0x0): first defined here
      dene2: In function `__data_start':
      (.data+0x0): multiple definition of `__data_start'
      /usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/crt1.o:(.data+0x0): first defined here
      dene2: In function `__data_start':
      (.data+0x4): multiple definition of `__dso_handle'
      /usr/lib/gcc/i486-linux-gnu/4.4.1/crtbegin.o:(.data+0x0): first defined here
      dene2: In function `_init':
      (.init+0x0): multiple definition of `_init'
      /usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/crti.o:(.init+0x0): first defined here
      /tmp/ccMlGkkV.o: In function `main':
      /home/fatih/Desktop/dene2.c:5: multiple definition of `main'
      dene2:(.text+0xb4): first defined here
      /usr/lib/gcc/i486-linux-gnu/4.4.1/crtend.o:(.dtors+0x0): multiple definition of `__DTOR_END__'
      dene2:(.dtors+0x4): first defined here
      collect2: ld returned 1 exit status

    Read the article

  • Does the query plan optimizer work well with joined/filtered table-valued functions?

    - by smoothdeveloper
    In SQL Server 2005, I'm using a table-valued function as a convenient way to perform arbitrary aggregation on subset data from a large table (passing a date range or similar parameters). I'm using these inside larger queries as joined computations, and I'm wondering if the query plan optimizer works well with them in every condition, or if I'm better off unnesting such computations in my larger queries.

    Does the query plan optimizer unnest table-valued functions if it makes sense? If it doesn't, what do you recommend to avoid the code duplication that would occur by manually unnesting them? If it does, how do you identify that from the execution plan?

    Code sample:

      create table dbo.customers (
          [key] uniqueidentifier
        , constraint pk_dbo_customers primary key ([key])
      )
      go

      /* assume large amount of data */
      create table dbo.point_of_sales (
          [key] uniqueidentifier
        , customer_key uniqueidentifier
        , constraint pk_dbo_point_of_sales primary key ([key])
      )
      go

      create table dbo.product_ranges (
          [key] uniqueidentifier
        , constraint pk_dbo_product_ranges primary key ([key])
      )
      go

      create table dbo.products (
          [key] uniqueidentifier
        , product_range_key uniqueidentifier
        , release_date datetime
        , constraint pk_dbo_products primary key ([key])
        , constraint fk_dbo_products_product_range_key
            foreign key (product_range_key) references dbo.product_ranges ([key])
      )
      go

      /* assume large amount of data */
      create table dbo.sales_history (
          [key] uniqueidentifier
        , product_key uniqueidentifier
        , point_of_sale_key uniqueidentifier
        , accounting_date datetime
        , amount money
        , quantity int
        , constraint pk_dbo_sales_history primary key ([key])
        , constraint fk_dbo_sales_history_product_key
            foreign key (product_key) references dbo.products ([key])
        , constraint fk_dbo_sales_history_point_of_sale_key
            foreign key (point_of_sale_key) references dbo.point_of_sales ([key])
      )
      go

      create function dbo.f_sales_history_..snip.._date_range (
          @accountingdatelowerbound datetime,
          @accountingdateupperbound datetime
      )
      returns table
      as
      return (
          select pos.customer_key
               , sh.product_key
               , sum(sh.amount) amount
               , sum(sh.quantity) quantity
          from dbo.point_of_sales pos
               inner join dbo.sales_history sh on sh.point_of_sale_key = pos.[key]
          where sh.accounting_date between @accountingdatelowerbound and @accountingdateupperbound
          group by pos.customer_key
                 , sh.product_key
      )
      go

      -- TODO: insert some data

      -- this is a table containing a selection of product ranges
      declare @selectedproductranges table([key] uniqueidentifier)

      -- this is a table containing a selection of customers
      declare @selectedcustomers table([key] uniqueidentifier)

      declare @low datetime
            , @up datetime

      -- TODO: set top query parameters

      select saleshistory.customer_key
           , saleshistory.product_key
           , saleshistory.amount
           , saleshistory.quantity
      from dbo.products p
           inner join @selectedproductranges productrangeselection
               on p.product_range_key = productrangeselection.[key]
           inner join @selectedcustomers customerselection on 1 = 1
           inner join dbo.f_sales_history_..snip.._date_range(@low, @up) saleshistory
               on saleshistory.product_key = p.[key]
               and saleshistory.customer_key = customerselection.[key]

    I hope the sample makes sense. Much thanks for your help!

    Read the article

  • A dynamic array of class "landmark", inside another single class "landmarks"

    - by pinnacler
    I'm working on a robot localization simulator and I created a class called "landmark". The end result is going to be a robot that is always centered and always faces the top of the screen. As it turns, the bird's-eye-view map will rotate around the robot. To accomplish this, I'm assuming I can rotate one class and have all elements inside rotate as well.

    So, the landmark class has properties x, y, label, and radius. This is supposed to simulate a tree location in a forest. To test everything, I need "forest data", and I wrote a script to generate 100 trees in a 100m x 100m area. The script automatically generates values within an acceptable range for x, y, and radius. The generated data is stored in an object called tempForest and is 100x3.

    Ideally, I want to create a class called "landmarks" (plural) that has 100 landmark instances inside. How would I instantiate 100 instances of landmark in one instance of landmarks using that randomly generated data? Ideally, I'd just type treeBeacons = landmarks(); and it would randomly populate 100 (user definable, set in a config file) instances with x, y, radius data. I'm not sure how to deal with a dynamic array of class "landmark" inside another single class "landmarks". Any ideas?
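
    The question appears to be about MATLAB classes, so treat this only as a language-neutral sketch of the idea (a plural container that builds N randomly placed instances of the singular class), written here in Python with made-up field ranges:

      import random

      class Landmark:
          """One tree: position, label and trunk radius."""
          def __init__(self, x, y, label, radius):
              self.x, self.y, self.label, self.radius = x, y, label, radius

      class Landmarks:
          """A container that owns a list of Landmark instances."""
          def __init__(self, count=100, area=100.0, max_radius=0.5):
              self.items = [
                  Landmark(x=random.uniform(0, area),
                           y=random.uniform(0, area),
                           label="tree%d" % i,
                           radius=random.uniform(0.05, max_radius))
                  for i in range(count)
              ]

          def rotate(self, angle_deg):
              # Rotating the container can simply loop over (or transform) the items,
              # which is what lets the whole map turn around the robot.
              pass

      tree_beacons = Landmarks()          # 100 randomly generated landmarks
      print(len(tree_beacons.items), tree_beacons.items[0].x)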

    Read the article

  • Using LINQ to SQL and chained Replace

    - by White Dragon
    I have a need to replace multiple strings with others in a query:

      from p in dx.Table
      where p.Field.Replace("A", "a").Replace("B", "b").ToLower() = SomeVar
      select p

    which provides a nice single SQL statement with the relevant REPLACE() sql commands. All good :) I need to do this in a few queries around the application, so I'm looking for some help in this regard that will work as above, as a single SQL hit/command on the server. It seems from looking around that I can't use RegEx as there is no SQL equivalent. Being a LINQ newbie, is there a nice way for me to do this? E.g. is it possible to get it as an IQueryable "var result", say, and pass that to a function to add the needed .Replace()'s and pass back? Can I get a quick example of how, if so?

    EDIT: This seems to work! Does it look like it would be a problem?

      var data = from p in dx.Videos
                 select p;
      data = AddReplacements(data, checkMediaItem);
      theitem = data.FirstOrDefault();
      ...

      public IQueryable<Video> AddReplacements(IQueryable<Video> DataSet, string checkMediaItem)
      {
          return DataSet.Where(p => p.Title.Replace(" ", "-").Replace("&", "-").Replace("?", "-") == checkMediaItem);
      }

    Read the article

  • How to call a stored proc from the ASP.NET MVC stack via the ORM and return the results as JSON?

    - by melaos
    Hi guys, I'm a total newbie with ASP.NET MVC and here's my jam: I have a 3-level set of list boxes where a selection in box A shows the options in box B, and a selection in box B shows the options in box C. I'm trying to do the whole thing in ASP.NET MVC, and what I see is that the Nerd Dinner tutorial uses the ORM method. So I created a dbml mapped to the database and dragged the stored procs inside. I created a DataContext object, but I don't quite know how to take the result from the stored proc, which should be multiple rows of data, and turn it into JSON. I want to keep all the JSON data inside the html page so that, using jQuery, I can make the selection process faster. I don't expect the data inside the three boxes to change very often, so I think this method should be quite viable.

    Questions: How do I get the stored proc part to return the data as JSON? I've noticed in some tutorials online that the JSON return result part is at the controller and not at the model end. Why is that?
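
    In MVC frameworks the "rows to JSON" step usually lives in the controller/action layer (in ASP.NET MVC, by returning a JsonResult), with the model just handing back typed rows. As a framework-neutral illustration of that shape only, here is a sketch in Python using sqlite3 and the standard json module; the table and column names are invented:

      import json
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE categories (id INTEGER, parent_id INTEGER, name TEXT)")
      conn.executemany("INSERT INTO categories VALUES (?, ?, ?)",
                       [(1, None, "A1"), (2, 1, "B1"), (3, 1, "B2")])

      def get_options_as_json(parent_id):
          """'Controller' step: run the query (stand-in for the stored proc)
          and serialize the rows as JSON for the page's JavaScript to consume."""
          conn.row_factory = sqlite3.Row
          rows = conn.execute(
              "SELECT id, name FROM categories WHERE parent_id IS ?", (parent_id,)
          ).fetchall()
          return json.dumps([dict(row) for row in rows])

      print(get_options_as_json(1))   # [{"id": 2, "name": "B1"}, {"id": 3, "name": "B2"}]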

    Read the article

  • How do I send an XML document to an ASP.NET MVC page for manipulation

    - by Decker
    I have some hierarchical data stored as multiple XML files on the server according to a vendor's schema. In my ASP.NET MVC (2!) application, I'd like the user to choose one of these hierarchies (i.e. files - I provide a list in my controller's Index action). When the user selects one to "edit", my edit action should return a page that presents the XML hierarchy (it's a representation of a folder tree). My thought is that the view would return HTML containing a jQuery on-load ajax call back to the server for the XML data, at which point I would present the tree using one of the many jQuery tree controls. On the client side I'd like the user to manipulate the tree and, when done, post back the new hierarchy, with which I would replace the original XML file that represents that hierarchy. So my questions are:

    1. What form should I use to send the data down - XML or JSON? If I send down XML then I would have to not only read the XML (which jQuery can do) but also be able to modify that XML and then send it back. Can I use jQuery to modify this XML DOM? And will all the namespace declarations be preserved?

    2. What form should I send the data back in? If I originally sent the client the hierarchy as JSON (using JsonResult), then presumably I would have a hierarchy of javascript objects. What options would I have to post that back? Would I have to recreate the XML representation on the client and post that back? Or should I serialize back to JSON, post that to the server, and then have the server do the work of recreating the XML according to the schema?

    Thanks for any advice.
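
    For the "server recreates the XML" option, the server-side piece is essentially a recursive walk in each direction. Here is a rough, schema-agnostic sketch of the XML-to-nested-structure half in Python (the element and attribute names are placeholders; a real version would follow the vendor's schema and preserve its namespaces):

      import json
      import xml.etree.ElementTree as ET

      SAMPLE = """<folder name="root">
        <folder name="docs">
          <folder name="2010"/>
        </folder>
        <folder name="images"/>
      </folder>"""

      def folder_to_dict(elem):
          """Turn a <folder> element into {"name": ..., "children": [...]}."""
          return {
              "name": elem.get("name"),
              "children": [folder_to_dict(child) for child in elem.findall("folder")],
          }

      root = ET.fromstring(SAMPLE)
      print(json.dumps(folder_to_dict(root), indent=2))

      # The reverse direction (posted JSON -> Elements -> file on disk) is the mirror
      # image: build Elements recursively and call ET.ElementTree(root).write(path).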

    Read the article

  • Wanted: How to reliably, consistently select an MKMapView annotation

    - by jdandrea
    After calling MKMapView's setCenterCoordinate:animated: method (without animation), I'd like to call selectAnnotation:animated: (with animation) so that the annotation pops out from the newly-centered pushpin. For now, I simply watch for mapViewDidFinishLoadingMap: and then select the annotation. However, this is problematic. For instance, this method isn't called when there's no need to load additional map data. In those cases, my annotation isn't selected. :( Very well. I could call this immediately after setting the center coordinate instead. Ahh, but in that case it's possible that there is map data to load (but it hasn't finished loading yet). I'd risk calling it too soon, with the animation becoming spotty at best. Thus, if I understand correctly, it's not a matter of knowing if my coordinate is visible, since it's possible to stray almost a screenful of distance and have to load new map data. Rather, it's a matter of knowing if new map data needs to be loaded, and then acting accordingly. Any ideas on how to accomplish this, or how to otherwise (reliably) select an annotation after re-centering the map view on the coordinate where that annotation lives? Clues appreciated - thanks!

    Read the article

  • My First F# program

    - by sudaly
    Hi, I just finished writing my first F# program. Functionality-wise the code works the way I wanted, but I'm not sure if the code is efficient. I would much appreciate it if someone could review the code for me and point out the areas where it can be improved. Thanks, Sudaly

      open System
      open System.IO
      open System.IO.Pipes
      open System.Text
      open System.Collections.Generic
      open System.Runtime.Serialization

      [<DataContract>]
      type Quote = {
          [<field: DataMember(Name="securityIdentifier") >] RicCode:string
          [<field: DataMember(Name="madeOn") >] MadeOn:DateTime
          [<field: DataMember(Name="closePrice") >] Price:float
      }

      let m_cache = new Dictionary<string, Quote>()

      let ParseQuoteString (quoteString:string) =
          let data = Encoding.Unicode.GetBytes(quoteString)
          let stream = new MemoryStream()
          stream.Write(data, 0, data.Length);
          stream.Position <- 0L
          let ser = Json.DataContractJsonSerializer(typeof<Quote array>)
          let results:Quote array = ser.ReadObject(stream) :?> Quote array
          results

      let RefreshCache quoteList =
          m_cache.Clear()
          quoteList |> Array.iter(fun result->m_cache.Add(result.RicCode, result))

      let EstablishConnection() =
          let pipeServer = new NamedPipeServerStream("testpipe", PipeDirection.InOut, 4)
          let mutable sr = null
          printfn "[F#] NamedPipeServerStream thread created, Wait for a client to connect"
          pipeServer.WaitForConnection()
          printfn "[F#] Client connected."
          try
              // Stream for the request.
              sr <- new StreamReader(pipeServer)
          with
          | _ as e -> printfn "[F#]ERROR: %s" e.Message
          sr

      while true do
          let sr = EstablishConnection()
          // Read request from the stream.
          printfn "[F#] Ready to Receive data"
          sr.ReadLine() |> ParseQuoteString |> RefreshCache
          printfn "[F#]Quot Size, %d" m_cache.Count
          let quot = m_cache.["MSFT.OQ"]
          printfn "[F#]RIC: %s" quot.RicCode
          printfn "[F#]MadeOn: %s" (String.Format("{0:T}",quot.MadeOn))
          printfn "[F#]Price: %f" quot.Price

    Read the article

  • Swingworker producing duplicate output/output out of order?

    - by Stefan Kendall
    What is the proper way to guarantee delivery when using a SwingWorker? I'm trying to route data from an InputStream to a JTextArea, and I'm running my SwingWorker with the execute method. I think I'm following the example here, but I'm getting out-of-order results, duplicates, and general nonsense. Here is my non-working SwingWorker:

      class InputStreamOutputWorker extends SwingWorker<List<String>,String> {
          private InputStream is;
          private JTextArea output;

          public InputStreamOutputWorker(InputStream is, JTextArea output) {
              this.is = is;
              this.output = output;
          }

          @Override
          protected List<String> doInBackground() throws Exception {
              byte[] data = new byte[4 * 1024];
              int len = 0;
              while ((len = is.read(data)) > 0) {
                  String line = new String(data).trim();
                  publish(line);
              }
              return null;
          }

          @Override
          protected void process( List<String> chunks ) {
              for( String s : chunks ) {
                  output.append(s + "\n");
              }
          }
      }

    Read the article

  • jQuery Autocomplete & jTemplates - handling response

    - by Diegos Grace
    Has anyone had any experience with using jTemplates to display autocomplete results? I have the following:

      $("#address-search").autocomplete({
          source: "/Address/SearchAddress",
          minLength: 2,
          delay: 400,
          focus: function (event, ui) {
              $('#address-search').val(ui.item.name);
              return false;
          },
          parse: function (data) {
              $("#autocomplete-results").setTemplate($("#templateHolder").html());
              $("#autocomplete-results").processTemplate(data);
          },
          select: function (event, ui) {
              $('#address-search').val(ui.item.name);
              $('#search-address-id').val(ui.item.id);
              $('#search-description').html(ui.item.address);
          }
      });

    and the simple jTemplate holder:

      <script type="text/html" id="templateHolder">
          <ul class="autocomplete">
          {#foreach $T as data}
              <li>{$T.name}</li>
          {#/for}
          </ul>
      </script>

    Above I'm using 'parse' to format the results. I've also tried the autocomplete result method, but am not having any luck so far. The only success I've had is by using the private method ._renderItem and formatting the data that way, but we want to render the output using the jTemplate. Any advice appreciated.

    Read the article

  • moving audio over a local network using GStreamer

    - by James Turner
    I need to move realtime audio between two Linux machines, which are both running custom software (of mine) which builds on top of Gstreamer. (The software already has other communication between the machines, over a separate TCP-based protocol - I mention this in case having reliable out-of-band data makes a difference to the solution). The audio input will be a microphone / line-in on the sending machine, and normal audio output as the sink on the destination; alsasrc and alsasink are the most likely, though for testing I have been using the audiotestsrc instead of a real microphone. GStreamer offers a multitude of ways to move data round over networks - RTP, RTSP, GDP payloading, UDP and TCP servers, clients and sockets, and so on. There's also many examples on the web of streaming both audio and video - but none of them seem to work for me, in practice; either the destination pipeline fails to negotiate caps, or I hear a single packet and then the pipeline stalls, or the destination pipeline bails out immediately with no data available. In all cases, I'm testing on the command-line just gst-launch. No compression of the audio data is required - raw audio, or trivial WAV, uLaw or aLaw encoding is fine; what's more important is low-ish latency.

    Read the article

  • DataTable vs. Collection in .Net

    - by B Pete
    I am writing a program that needs to read a set of records that describe the register map of a device I need to communicate with. Each record will have a handful of fields that describe the properties of each register. I don't really need to edit or modify the data in my VB or C# program, though I would like to be able to display the data on a grid. I would like to store the data in a CSV file, or perhaps an XML file. I need to enable users to edit the data off-line, preferably in Excel.

    I am considering using a DataTable or a Collection of "Register" objects (which I would define). I prototyped a DataTable, and found I can read/write XML easily using the built-in methods and I can easily bind to a DataGridView. I was not able to find a way to retrieve info on a single register without using a query that returns a collection of rows, even though I defined a unique primary key column. The syntax to get a value from a column is also complex, though I could be missing something on both counts.

    I'm tempted to use a collection of "Register" objects that I can access via a unique key. It would be a little more coding up front, but seems like a cleaner solution overall. I should still be able to use LINQ to DataSet to query subsets of registers when I need them, but would also be able to grab a single field using the key value, something like this: Registers(keyValue).fieldName.

    Which would be a cleaner approach to the problem? Is there a way to read/write XML into a Collection without needing custom code? Could this be accomplished using String for a key?
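
    As a language-neutral sketch of the "collection keyed by a unique string" option (the question is about .NET, where a Dictionary<string, Register> would play this role), here is what it can look like in Python with a CSV-backed register map; the field names and sample rows are invented for the example:

      import csv
      import io
      from dataclasses import dataclass

      @dataclass
      class Register:
          name: str       # unique key, e.g. "STATUS"
          address: int
          width: int
          description: str

      SAMPLE_CSV = """name,address,width,description
      STATUS,0x00,16,Device status flags
      CONTROL,0x02,16,Control bits
      TEMP,0x10,12,Temperature reading
      """

      def load_register_map(text):
          """Read the CSV once and index every record by its register name."""
          registers = {}
          for row in csv.DictReader(io.StringIO(text)):
              reg = Register(name=row["name"].strip(),
                             address=int(row["address"], 16),
                             width=int(row["width"]),
                             description=row["description"])
              registers[reg.name] = reg
          return registers

      regs = load_register_map(SAMPLE_CSV)
      print(regs["STATUS"].address)       # single-field lookup by key: 0
      print(regs["TEMP"].width)           # 12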

    Read the article

  • Adding trend lines/boxplots (by group) in ggplot2

    - by Tal Galili
    Hi all, I have 40 subjects, in two groups, measured over 15 weeks on some variable (Y). I wish to have a plot where x = time, y = Y, lines are by subject and colours are by group. I found it can be done like this:

      TIME <- paste("week",5:20)
      ID <- 1:40
      GROUP <- sample(c("a","b"),length(ID), replace = T)
      group.id <- data.frame(GROUP, ID)
      a <- expand.grid(TIME, ID)
      colnames(a) <- c("TIME", "ID")
      group.id.time <- merge(a, group.id)
      Y <- rnorm(dim(group.id.time)[1], mean = ifelse(group.id.time$GROUP =="a",1,3) )
      DATA <- cbind(group.id.time, Y)
      qplot(data = DATA, x=TIME, y=Y, group=ID, geom = c("line"), colour = GROUP)

    But now I wish to add something to the plot to show the difference between the two groups (for example, a trend line for each group, with some CI shade lines) - how can it be done? I remember once seeing that ggplot2 can (easily) do this with geom_smooth, but I am missing something about how to make it work.

    Also, I wondered about having the lines be like a boxplot for each group (with a line for the different quantiles and fences and so on). But I imagine answering the first question would help me resolve the second. Thanks.

    Read the article

  • JQuery Post-Request question - FF doesn't get the result of the referenced php page

    - by OlliD
    Dear community, I just want to post my question here, starting from the beginning: for a personal web project I use PHP + jQuery. Now I got stuck when trying to use the ajax post method to send data to another PHP page. I planned to have some navigational elements like next + previous at the bottom of the page, while saving the user's input / user-given data. The code looks as follows:

      <div id="bottom_nav">
        <div id="prev" class="flt_l"><a href="?&step=<?= $pages[$step-1] ?>">next</a></div>
        <div id="next" class="flt_r"><a href="?&step=<?= $pages[$step+1] ?>">previous</a></div>
      </div>

    The functionality of the page works fine. Later on, I use the following code to send data over via POST:

      $("#bottom_nav a").click( function() {
          alert("POST-Link: Parameter=" + $("#Parameter").val());
          $.ajax( {
              type:"post",
              url:"saveParameter.php",
              data:"Parameter=" + $("#Parameter").val(),
              success: function(result) {
                  alert(result);
                  //$("#test").text(result);
              }
          });
      });

    The request itself works perfectly in IE, but in FF I'm not able to get back any result. Within the PHP page, there is just:

      <? echo $_POST['Parameter']; ?>

    While IE returns the correct value, FF just shows an empty message box. I assumed that the behaviour of the link element is different: while IE seems to handle the click event after the JS code execution, FF interprets it before. My question is whether you have a solution for this, either by restructuring the code itself or by using another method to reach the intended behaviour. Thanks for your assistance and recommendations, Olli

    Read the article

  • NHibernate.MappingException - Troubleshooting Checklist (no persister for)

    - by Berryl
    Here's a starter list:

    1) if the hbm is hand generated, is it an embedded resource?
    2) if using FNH, does it pass a PersistenceSpecification test?
    3) if not using FNH, can you save and then load the persisted class?
    4) more?

    I'm sure many of you have gotten this one at one point or another. But have you ever gotten it when you knew your mapping was set up correctly? I started getting this exception after I started using a new repository design, but only in one scenario! PersistenceSpecification tests pass, as do all repository methods (using SQLite). The scenario that leads to the exception is when legacy projects from a different db are converted to a green field system. The legacy system is from a different database and has its own session factory, which should be irrelevant because the error comes after previously unconverted Projects are retrieved and in memory. As the routine tries to save these unconverted Projects into the new database, the exception is thrown; full stack trace below. Any ideas on how to build up the troubleshooting checklist and solve this problem? Cheers, Berryl

    === the Exception trace =====

      failed: NHibernate.MappingException : No persister for: Smack.ConstructionAdmin.Domain.Model.Projects.Project
      at NHibernate.Impl.SessionFactoryImpl.GetEntityPersister(String entityName)
      at NHibernate.Impl.SessionImpl.GetEntityPersister(String entityName, Object obj)
      at NHibernate.Engine.ForeignKeys.IsTransient(String entityName, Object entity, Nullable`1 assumed, ISessionImplementor session)
      at NHibernate.Event.Default.AbstractSaveEventListener.GetEntityState(Object entity, String entityName, EntityEntry entry, ISessionImplementor source)
      at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.PerformSaveOrUpdate(SaveOrUpdateEvent event)
      at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.OnSaveOrUpdate(SaveOrUpdateEvent event)
      at NHibernate.Impl.SessionImpl.FireSaveOrUpdate(SaveOrUpdateEvent event)
      at NHibernate.Impl.SessionImpl.SaveOrUpdate(Object obj)
      NHibernate\Repository\FabioNHibRepository.cs(46,0): at Smack.Core.Data.NHibernate.Repository.FabioNHibRepository`1.Add(T item)
      LegacyConversion\LegacyBatchUpdater.cs(20,0): at Smack.ConstructionAdmin.Data.LegacyConversion.LegacyBatchUpdater.ConvertOpenLegacyProjects(ILegacyProjectDao legacyProjectDao, IProjectRepository greenProjectRepository)
      Data\Brownfield\ProjectBatchUpdate_SQLiteTests.cs(19,0): at Smack.ConstructionAdmin.Tests.Data.Brownfield.ProjectBatchUpdate_SQLiteTests.Test()

    Read the article

  • MySQL MyISAM table performance... painfully, painfully slow

    - by Salman A
    I've got a table structure that can be summarized as follows:

      pagegroup
        * pagegroupid
        * name
      has 3600 rows

      page
        * pageid
        * pagegroupid
        * data
      references pagegroup; has 10000 rows; can have anything between 1-700 rows per pagegroup; the data column is of type mediumtext and contains 100k - 200k bytes of data per row

      userdata
        * userdataid
        * pageid
        * column1
        * column2
        * column9
      references page; has about 300,000 rows; can have about 1-50 rows per page

    The above structure is pretty straightforward. The problem is that a join from userdata to pagegroup is terribly, terribly slow even though I have indexed all columns that should be indexed. The time needed to run a query for such a join (userdata inner join page inner join pagegroup) exceeds 3 minutes. This is terribly slow considering the fact that I am not selecting the data column at all. Example of the query that takes too long:

      SELECT userdata.column1, pagegroup.name
      FROM userdata
      INNER JOIN page USING( pageid )
      INNER JOIN pagegroup USING( pagegroupid )

    Please help by explaining why it takes so long and what I can do to make it faster.

    Edit #1

    Explain returns the following:

      id  select_type  table      type    possible_keys        key      key_len  ref                         rows    Extra
      1   SIMPLE       userdata   ALL     pageid                                                             372420
      1   SIMPLE       page       eq_ref  PRIMARY,pagegroupid  PRIMARY  4        topsecret.userdata.pageid   1
      1   SIMPLE       pagegroup  eq_ref  PRIMARY              PRIMARY  4        topsecret.page.pagegroupid  1

    Edit #2

      SELECT u.field2, p.pageid
      FROM userdata u
      INNER JOIN page p ON u.pageid = p.pageid;
      /* 0.07 sec execution, 6.05 sec fetch */

      id  select_type  table  type    possible_keys  key      key_len  ref                 rows    Extra
      1   SIMPLE       u      ALL     pageid                                               372420
      1   SIMPLE       p      eq_ref  PRIMARY        PRIMARY  4        topsecret.u.pageid  1       Using index

      SELECT p.pageid, g.pagegroupid
      FROM page p
      INNER JOIN pagegroup g ON p.pagegroupid = g.pagegroupid;
      /* 9.37 sec execution, 60.0 sec fetch */

      id  select_type  table  type   possible_keys  key          key_len  ref                      rows  Extra
      1   SIMPLE       g      index  PRIMARY        PRIMARY      4                                 3646  Using index
      1   SIMPLE       p      ref    pagegroupid    pagegroupid  5        topsecret.g.pagegroupid  3     Using where

    Moral of the story: keep medium/long text columns in a separate table if you run into performance problems such as this one.

    Read the article

  • What is the difference between cubes and the Unified Dimensional Model (if any)?

    - by ngm
    I'm currently researching SQL Server 2008 as a business intelligence solution, and currently looking at Analysis Services (and I'm pretty new to business intelligence as a whole...) I'm a bit confused by some of the terms in SSAS, particularly the conceptual differences between cubes and MS's Unified Dimensional Model. I believe that a cube in SSAS is basically an OLAP cube -- dimensions, measures, something that sits between the underlying data source and a business user. But then that's kind of what I understand UDM to be as well. The docs for SQL Server 2005 seem to suggest as much: "A cube is essentially synonymous with a Unified Dimensional Model (UDM)". But then the SQL Server 2008 pages sort of suggest that UDM is a wrapper for both multidimensional data (cubes) and relational data: "Use the Unified Dimensional Model to provide one consolidated business view for relational and multidimensional data that includes business entities, business logic, calculations, and metrics." This blog post suggests similarly: "UDM provides a single dimensional model for all OLAP analysis and relational reporting needs. So you can use either MDX or SQL" Is UDM something that sits above cubes? Or are they the same thing? I presume I would develop cubes with the Cube Designer application; what would I develop a UDM with?

    Read the article

  • Getting content from PHP: Trouble with POST and query.

    - by vgm64
    Apologies for my longest question on SO ever. I'm trying to interface with a PHP frontend for a MySQL database from ROOT (a CERN framework in C++ for high energy physics analysis). To start off with, I tried to get this PHP interface to play nice with wget and curl first, because I'm more familiar with them. The following command works:

      wget --post-data "hostname=localhost:3306&un=joeuser&pw=psswd&myquery=show_spazio_databases;" http://some.host.edu/log/log_query_matlab.php

    The results are:

      database1
      database2

    That's good. If I leave out the --post-data then I get the result:

      Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'admin'@'localhost' (using password: NO) in /log/log_query_matlab.php on line 6
      i'm dead! Access denied for user 'admin'@'localhost' (using password: NO)
      Warning: mysql_query() [function.mysql-query]: Access denied for user 'admin'@'localhost' (using password: NO) in /log/log_query_matlab.php on line 29
      Warning: mysql_query() [function.mysql-query]: A link to the server could not be established in /log/log_query_matlab.php on line 29

    I have access to the PHP script (read only), but the error itself isn't too important. What matters is that using ROOT, I call a function socket.SendRaw(message, message.Length()) (socket is a TSocket), and this gives me the same "error" as wget without the post-data switch if my "message" is:

      POST http://some.host.edu/log/log_query_matlab.php?hostname=localhost:3306&un=joeuser&pw=psswd&myquery=show_spazio_databases

    This may be in vain, but does someone know how I should format the "message" so that it includes something equivalent to the --post-data switch? Or, is there a standard way to format POST requests in a single line (I've seen multi-line stuff - is that right?) Sorry, I'm clueless!

    PS. The MySQL query is "show databases", but the space has been replaced with _spazio_, Italian for space. The author of the db and PHP interface requires it (and various replacements for symbols), but has anyone seen this before? Trying to troubleshoot that was terrible!
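
    An HTTP POST is inherently multi-line: a request line, headers (Host, Content-Type, Content-Length), a blank line, then the form-encoded body - which is exactly what wget's --post-data builds for you. As a rough sketch of the bytes that need to go over the TSocket, here is the same request assembled and sent with a plain Python socket (host and path copied from the question, credentials obviously placeholders):

      import socket

      host = "some.host.edu"
      path = "/log/log_query_matlab.php"
      body = "hostname=localhost:3306&un=joeuser&pw=psswd&myquery=show_spazio_databases;"

      request = (
          "POST " + path + " HTTP/1.1\r\n"
          "Host: " + host + "\r\n"
          "Content-Type: application/x-www-form-urlencoded\r\n"
          "Content-Length: " + str(len(body)) + "\r\n"
          "Connection: close\r\n"
          "\r\n"                      # blank line separates headers from the body
          + body
      )

      with socket.create_connection((host, 80)) as sock:
          sock.sendall(request.encode("ascii"))
          response = b""
          while True:
              chunk = sock.recv(4096)
              if not chunk:
                  break
              response += chunk

      print(response.decode("latin-1"))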

    Read the article

  • Unexpected Event Behavior When Using VB6 with COM Interop (C#)

    - by Randal
    We are using a COM Interop (C#) to allow a VB6 application to send data to a server. Once the server receives the data, the managed code raises a DataSent event. This event is only fired after a correlation ID is returned to the original caller. About 1% of the time, we've encountered VB6 executing the raised event before finishing the function that originally sent the data. We are using the following code:

      ' InteropTester.COMEvents is the C# object '
      Dim WithEvents m_ManagedData as InteropTester.COMEvents

      Private Sub send_data()
          Set m_ManagedData = new COMEvents
          Dim id as Integer
          ' send 5 using the managed interop object '
          id = m_ManagedData.SendData(5)
          LogData "ID " & id & " was returned"
          m_correlationIds.Add id
      End Sub

      Private Sub m_ManagedData_DataSent(ByVal sender as Variant, ByVal id as Integer)
          LogData "Data was successfully sent to C#"
          ' check if the returned ID is in the m_correlationIds collection goes here '
      End Sub

    We can verify that the id is returned with a value when we call m_ManagedData.SendData(5), but the logs then show that m_ManagedData_DataSent is occasionally called before send_data ends. How is it possible for VB6 to access the message loop and know that the DataSent event was raised before exiting send_data()? We are not calling DoEvents and everything within VB6 is synchronous. Thanks in advance for your help.

    Read the article
