Search Results

Search found 59975 results on 2399 pages for 'data comparison'.


  • How reliable is DateTime.Utc in Silverlight applications?

    - by Edward Tanguay
    I have a Silverlight application which users will be running in various time zones. The applications load their data from the server at one time, then cache it in IsolatedStorage. When I make changes to the data on the server, I want to be able to change the "last updated time" so that all applications download the newest data the next time they check this date. However, I'm a bit confused as to how to handle the time zone issue: if the server is in New York and the update time is set to 2010-01-01 17:00:00, a client in Seattle that compares it to its local time of 2010-01-01 14:00:00 won't update and will continue to serve old data for three more hours. My solution is to always post the update time in UTC, not the server's local time, and then have the Silverlight app check against DateTime.UtcNow. Is this as easy as it sounds, or are there issues with this, e.g. that time zones are not set correctly on client computers and hence the Silverlight app does not report the correct UTC time? Can anyone say from experience how likely it is that using DateTime.UtcNow like this for cache refreshing will work in all cases? If DateTime.UtcNow is not reliable, I will just use an incremented "DataVersion" integer, but there are other scenarios in which getting time zone synchronization right would be useful, so I'd like to thoroughly understand how to solve this in Silverlight apps.
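    The idea can be sketched independently of Silverlight; a minimal Java illustration (the class and field names are hypothetical, not from the question): the server publishes its last-update instant in UTC and the client stores a UTC timestamp alongside its cache, so local time zones never enter the comparison.

    ```java
    import java.time.Instant;

    // Minimal sketch of UTC-based cache invalidation (hypothetical names).
    class CacheChecker {

        /** UTC timestamp recorded when the cache was last filled. */
        private Instant cachedAt = Instant.EPOCH;

        /** @param serverLastUpdatedUtc last-update instant published by the server, in UTC */
        boolean needsRefresh(Instant serverLastUpdatedUtc) {
            // Both instants are UTC, so the result does not depend on the client's time zone.
            return serverLastUpdatedUtc.isAfter(cachedAt);
        }

        void markRefreshed() {
            // Instant.now() is UTC-based, analogous to DateTime.UtcNow in .NET.
            cachedAt = Instant.now();
        }
    }
    ```

    The caveat raised in the question still applies: DateTime.UtcNow (or Instant.now()) is only as good as the client clock, so a clock that is simply set wrong, as opposed to a mis-set time zone, still defeats the check.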

    Read the article

  • Passing object(s) to a Controller Action

    - by Nicholas H
    I'm attempting to use jQuery to do a $.post() to an MVC controller action. Here's the jQuery code: var _authTicket = { username: 'testingu', password: 'testingp' }; function DoPost() { var inputObj = { authTicket: _authTicket, dateRange: 'ThisWeek' }; $.post('/MyController/MyAction', inputObj, PostResult); } function PostResult(data) { alert(JSON.stringify(data)); } Here's the controller action: <HttpPost()> _ Function MyAction(ByVal authTicket As AuthTicket, ByVal dateRange As String) As ActionResult Dim u = ValidateUser(authTicket) If u Is Nothing Then Throw New Exception("User not valid") ' etc.. End Function The problem is, MyAction's "authTicket" parameter comes through empty. The second "dateRange" parameter gets populated just fine. I'm pretty sure that the issue stems from jQuery turning the data into key/value pairs to POST to the action. MVC must not understand the format, at least when it's an object with its own properties. The name/value pair format that jQuery is converting to is like: authTicket[username] = "testingu" authTicket[password] = "testingp" which in turn gets encoded as x-www-form-urlencoded data: authTicket%5Busername%5D=testingu &authTicket%5Bpassword%5D=testingp I guess I could always have "username" and "password" properties in my actions, but that seems wrong to me. Any help would be appreciated.

    Read the article

  • How to debug jQuery ajax POST to .NET webservice

    - by Nick
    Hey all. I have a web service that I have published locally. I wrote an HTML page to test the web service. The web service is expecting a string that will be XML. When I debug the web service from within VS everything works great. I can use FireBug and see that my web service is indeed being hit by the AJAX call. After deploying the web service locally to IIS it does not appear to be working. I need to know the best way to troubleshoot this problem. My AJAX JS looks like this: function postSummaryXml(data) { $.ajax({ type: 'POST', url: 'http://localhost/collection/services/datacollector.asmx/SaveSummaryDisplayData', data: { xml: data }, error: function(result) { alert(result); }, success: function(result) { alert(result); } }); } The web method looks like so: [WebMethod] public string SaveSummaryDisplayData(string xml) { XmlDocument xmlDocument = new XmlDocument(); xmlDocument.LoadXml(xml); XmlParser parser = new XmlParser(xmlDocument); IModel repository = new Repository(); if(repository.AddRecord("TestData",parser.TestValues)) { return "true"; } return "false"; } } I added the return string to the web method in an attempt to ensure that the web service is really being hit from the AJAX call. The problem is this: in FF the 'Success' function is always called, yet the result is always 'NULL', and the web method does not appear to have worked (no data saved). In IE the 'Error' function is called and the response is server error: 500. I think (believe it or not) IE is giving me a more accurate indication that something is wrong, but I cannot determine what the issue is. Can someone please suggest how I should expose the specific server error message? Thanks for the help!

    Read the article

  • Spell checker software

    - by Naren
    Hello Guys, I have been assigned a task to find a decent spell checker (UK English), preferably a free one, for a project that we are doing. I have looked at the Google AJAX API for this. The project contains data about young people (kids less than 18 years old) which shouldn't be exposed or stored outside the application boundaries. Google logs the data for research purposes, which means Google owns whatever data we send over the wire through the Google API. Is this right? I fired off an email to Google regarding the privacy of data and storage but they haven't come back. If you have some knowledge regarding this, please share it with me. At this point our servers might not have access to external entities, which means we might not be able to use a web API for this over the wire. But that may change in the future. That means I have to find some spell checker alternatives that can sit in our environment and do the job, or external APIs. Would you mind sharing your findings and knowledge in this regard? I would prefer free services, but you never know: if you have some cracking spell checker for a few quid then I don't mind recommending it to the project board. Technology used: ASP.NET 3.5/4.0, MVC, jQuery, SQL Server 2008, etc. Cheers, Naren

    Read the article

  • Java NIO (Netty): How does Encryption or GZIPping work in theory (with filters)

    - by Tom
    Hello experts, I would be very thankful if you could explain to me how, in theory, the "Interceptor/Filter" pattern for byte streams (over sockets/channels) works (in asynchronous IO with Netty) in regard to encryption or compression of data. Say I have a filter that does gzipping. How is this internally implemented? Does the filter "collect" bytes from the channel until it has a useful number of bytes that can then be en/decoded? What, in general, is the minimal "block size" (data to encode/decode in one chunk) of socket-based gzipping? Does this "block size" have to be negotiated in advance between server and client? What happens if the client does not send enough data to "fill" the block size (due to network congestion) but does not close the connection? Does this mean the other side will simply wait until it gets enough bytes to decode, or until a timeout occurs... How is the filter pattern then applied? Will the compression filter de/compress the block of bytes and then store them again in the same buffer? Would I (in the case of Netty) normally use the ChannelHandlerContext to pass the en/decoded data to the next filter?... Any explanations/links/tutorials (for beginners ;-) will be very much appreciated to help me understand how, for example, encryption/compression is implemented in socket-based communication with the filter/interceptor pattern. Thank you very much, Tom
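    A rough sketch of the "collect until decodable" idea in plain Java (deliberately not using Netty's own API, so the class and method names here are hypothetical): the filter accumulates whatever fragments the channel delivers and only decompresses once a complete length-prefixed frame has arrived, then hands the payload onward. Nothing has to be negotiated up front; the framing carries each chunk's length.

    ```java
    import java.io.ByteArrayOutputStream;
    import java.nio.ByteBuffer;
    import java.util.zip.Inflater;

    // Hypothetical inbound filter: buffers fragments until a whole frame of the form
    // [4-byte length][deflate-compressed payload] has arrived, then inflates it.
    class InflateFilter {
        private final ByteArrayOutputStream pending = new ByteArrayOutputStream();

        /** Returns the decompressed payload, or null if the frame is still incomplete. */
        byte[] onBytesReceived(byte[] fragment) throws Exception {
            pending.write(fragment);                      // collect whatever the channel delivered
            byte[] buf = pending.toByteArray();
            if (buf.length < 4) return null;              // not even the length header yet
            int frameLen = ByteBuffer.wrap(buf, 0, 4).getInt();
            if (buf.length < 4 + frameLen) return null;   // wait for the rest of the frame

            Inflater inflater = new Inflater();
            inflater.setInput(buf, 4, frameLen);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] tmp = new byte[4096];
            while (!inflater.finished()) {
                out.write(tmp, 0, inflater.inflate(tmp));
            }
            inflater.end();

            pending.reset();                              // keep any bytes belonging to the next frame
            pending.write(buf, 4 + frameLen, buf.length - 4 - frameLen);
            return out.toByteArray();                     // hand the decompressed data to the next filter
        }
    }
    ```

    If the peer stalls mid-frame, this filter simply keeps returning null and waits for more bytes, which matches the intuition in the question: the receiving side either waits or relies on a read timeout at a lower layer. Broadly speaking, in Netty itself this buffering role is played by decoder handlers in the pipeline, which pass their output to the next handler via the handler context.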

    Read the article

  • MPI hypercube broadcast error

    - by luvieere
    I've got a one-to-all broadcast method for a hypercube, written using MPI: one2allbcast(int n, int rank, void *data, int count, MPI_Datatype dtype) { MPI_Status status; int mask, partner; int mask2 = ((1 << n) - 1) ^ (1 << n-1); for (mask = (1 << n-1); mask; mask >>= 1, mask2 >>= 1) { if (rank & mask2 == 0) { partner = rank ^ mask; if (rank & mask) MPI_Recv(data, count, dtype, partner, 99, MPI_COMM_WORLD, &status); else MPI_Send(data, count, dtype, partner, 99, MPI_COMM_WORLD); } } } Upon calling it from main: int main( int argc, char **argv ) { int n, rank; MPI_Init (&argc, &argv); MPI_Comm_size (MPI_COMM_WORLD, &n); MPI_Comm_rank (MPI_COMM_WORLD, &rank); one2allbcast(floor(log(n) / log (2)), rank, "message", sizeof(message), MPI_CHAR); MPI_Finalize(); return 0; } Compiling and executing on 8 nodes, I receive a series of errors reporting that processes 1, 3, 5, 7 were stopped before the point of receiving any data: MPI_Recv: process in local group is dead (rank 1, MPI_COMM_WORLD) Rank (1, MPI_COMM_WORLD): Call stack within LAM: Rank (1, MPI_COMM_WORLD): - MPI_Recv() Rank (1, MPI_COMM_WORLD): - main() MPI_Recv: process in local group is dead (rank 3, MPI_COMM_WORLD) Rank (3, MPI_COMM_WORLD): Call stack within LAM: Rank (3, MPI_COMM_WORLD): - MPI_Recv() Rank (3, MPI_COMM_WORLD): - main() MPI_Recv: process in local group is dead (rank 5, MPI_COMM_WORLD) Rank (5, MPI_COMM_WORLD): Call stack within LAM: Rank (5, MPI_COMM_WORLD): - MPI_Recv() Rank (5, MPI_COMM_WORLD): - main() MPI_Recv: process in local group is dead (rank 7, MPI_COMM_WORLD) Rank (7, MPI_COMM_WORLD): Call stack within LAM: Rank (7, MPI_COMM_WORLD): - MPI_Recv() Rank (7, MPI_COMM_WORLD): - main() Where do I go wrong?

    Read the article

  • Loading a RSA private key from memory using libxmlsec

    - by ereOn
    Hello, I'm currently using libxmlsec in my C++ software and I am trying to load an RSA private key from memory. To do this, I searched through the API and found this function. It takes binary data, a size, a format string and several PEM-callback related parameters. When I call the function, it just hangs, uses 100% of the CPU time and never returns. Quite annoying, because I have no way of finding out what is wrong. Here is my code: d_xmlsec_dsig_context->signKey = xmlSecCryptoAppKeyLoadMemory( reinterpret_cast<const xmlSecByte*>(data), static_cast<xmlSecSize>(datalen), xmlSecKeyDataFormatBinary, NULL, NULL, NULL ); data is a const char* pointing to the raw bytes of my RSA key (using i2d_RSAPrivateKey(), from OpenSSL) and datalen is the size of data. My test private key doesn't have a passphrase so I decided not to use the callbacks for the time being. Has someone already done something similar? Do you guys see anything that I could change/test to move forward on this problem? I just discovered the library yesterday, so I might be missing something obvious here; I just can't see it. Thank you very much for your help.

    Read the article

  • How to display the decrypted and split Strings in order?

    - by sebby_zml
    Hi everyone, I need some help and guidance in displaying the split Strings in order. Let's say I have username, password, nonceInString. I have successfully encrypted and decrypted those, then split the decrypted data; that worked too. I want to display the decrypted data in order, something like this: username: sebastian password: harrypotter nonce value: sdgvay1saq3qsd5vc6dger9wqktue2tz* I tried the following code, but it didn't display the way I wanted. Please help. Thanks a lot in advance. String codeWord = username + ";" + password + ";" + nonceInString; String encryptedData = aesEncryptDecrypt.encrypt(codeWord); String decryptedData = aesEncryptDecrypt.decrypt(encryptedData); String[] splits = decryptedData.split(";"); String[] variables = {"username", "password", "nonce value"}; for (String data : variables){ for (String item : splits){ System.out.println( data + ": "+ item); } }
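    For what it's worth, a minimal standalone sketch of one way to get that pairing: walk a single index over both arrays instead of nesting one loop inside the other (the nested loops print every label with every value). The sample values below are taken from the question.

    ```java
    public class SplitDemo {
        public static void main(String[] args) {
            String decryptedData = "sebastian;harrypotter;sdgvay1saq3qsd5vc6dger9wqktue2tz*";
            String[] variables = {"username", "password", "nonce value"};
            String[] splits = decryptedData.split(";");

            // Pair each label with the value in the same position instead of
            // printing every label/value combination.
            for (int i = 0; i < variables.length && i < splits.length; i++) {
                System.out.println(variables[i] + ": " + splits[i]);
            }
        }
    }
    ```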

    Read the article

  • Help with creating a custom ItemRenderer for Flex 4

    - by elmonty
    I'm trying to create a custom ItemRenderer for a TileList in Flex 4. Here's my renderer: <?xml version="1.0" encoding="utf-8"?> <s:ItemRenderer xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" autoDrawBackground="true"> <mx:Image x="0" y="0" source="../images/blank-offer.png" width="160" height="144" smoothBitmapContent="true"/> <s:Label x="5" y="20" text="{data.title}" fontFamily="Verdana" fontSize="16" color="#696565" width="155"/> <s:Label x="5" y="42" text="{data.description}" fontFamily="Verdana" fontSize="8" color="#696565" width="154"/> <mx:Text x="3" y="59" text="{data.details}" fontFamily="Verdana" fontSize="8" color="#696565" width="157" height="65"/> <mx:Text x="3" y="122" text="{data.disclaimer}" fontFamily="Verdana" fontSize="5" color="#696565" width="157" height="21"/> </s:ItemRenderer> Here's my tile list: <mx:TileList x="0" y="0" width="100%" height="100%" id="tileList" creationComplete="tileList_creationCompleteHandler(event)" dataProvider="{getDataResult.lastResult}" labelField="title" itemRenderer="renderers.OfferLibraryListRenderer"></mx:TileList> When I run the app, I get this error: Error #1034: Type Coercion failed: cannot convert renderers::OfferLibraryListRenderer@32fce0a1 to mx.controls.listClasses.IListItemRenderer.

    Read the article

  • JSONP request using jQuery $.getJSON not working on well-formed JSON.

    - by Antti
    I'm not sure if it is even possible with the URL I am trying. Please see this URL: http://www.heiaheia.com/voimakaksikko/stats.json It always serves the same padding function "voimakaksikkoStats". It is well-formed JSON, but I have not been able to load it from the remote server. Does it need some work on the server side, or can it be loaded with JavaScript? I think the problem has got to have something to do with that callback function... jQuery is not a requirement, but it would be nice. This (callback=voimakaksikkoStats) returns nothing (Firebug - Net - Response), and the alert doesn't fire: $.getJSON("http://www.heiaheia.com/voimakaksikko/stats.json?callback=voimakaksikkoStats", function(data){ alert(data); }) but this (callback=?): $.getJSON("http://www.heiaheia.com/voimakaksikko/stats.json?callback=?", function(data){ alert(data); }) returns: voimakaksikkoStats({"Top5Sports":[],"Top5Tests":{"8":"No-exercise ennuste","1":"Painoindeksi","2":"Vy\u00f6t\u00e4r\u00f6n ymp\u00e4rys","10":"Cooperin testi","4":"Etunojapunnerrus"},"Top5CitiesByTests":[],"Top5CitiesByExercises":[],"ExercisesLogged":0,"Top5CitiesByUsers":[""],"TestsTaken":22,"RegisteredUsers":1}); But I cannot access it... In both examples the alert never fires. Can someone help?

    Read the article

  • Reading from serial port with Boost Asio?

    - by trikri
    Hi! I'm going to check for incoming messages (data packets) on the serial port, using Boost Asio. Each message will start with a header that is one byte long, and will specify which type of message has been sent. Each different type of message has its own length. The function I'm about to write should check for new incoming messages continually, and when it finds one it should read it, and then some other function should parse it. I thought that the code might look something like this: void check_for_incoming_messages() { boost::asio::streambuf response; boost::system::error_code error; std::string s1, s2; if (boost::asio::read(port, response, boost::asio::transfer_at_least(0), error)) { s1 = streambuf_to_string(response); int msg_code = s1[0]; if (msg_code < 0 || msg_code >= NUM_MESSAGES) { // Handle error, invalid message header } if (boost::asio::read(port, response, boost::asio::transfer_at_least(message_lengths[msg_code]-s1.length()), error)) { s2 = streambuf_to_string(response); // Handle the content of s1 and s2 } else if (error != boost::asio::error::eof) { throw boost::system::system_error(error); } } else if (error != boost::asio::error::eof) { throw boost::system::system_error(error); } } Is boost::asio::streambuf the right thing to use? And how do I extract the data from it so I can parse the message? I also want to know if I need to have a separate thread which only calls this function, so that it gets called more often. Isn't there otherwise a risk of losing data between two calls to the function, because so much data comes in that it can't be stored in the serial port's memory? I'm using Qt as a widget toolkit and I don't really know how long it needs to process all its events.
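    This isn't Boost.Asio, but the framing logic itself is language-neutral; here is a small Java sketch of the same loop (read the one-byte header, then read exactly that type's length; the message lengths are made up), which may help separate the protocol-parsing question from the streambuf question.

    ```java
    import java.io.DataInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Hypothetical framing reader: 1-byte type header followed by a fixed,
    // per-type payload length, as described in the question.
    class MessageReader {
        private static final int[] MESSAGE_LENGTHS = {4, 10, 2}; // lengths per type (made up)
        private final DataInputStream in;

        MessageReader(InputStream source) {
            this.in = new DataInputStream(source);
        }

        byte[] readMessage() throws IOException {
            int type = in.readUnsignedByte();            // the one-byte header
            if (type >= MESSAGE_LENGTHS.length) {
                throw new IOException("invalid message header: " + type);
            }
            byte[] payload = new byte[MESSAGE_LENGTHS[type]];
            in.readFully(payload);                        // block until the whole payload has arrived
            return payload;
        }
    }
    ```

    A blocking readFully keeps partial reads out of the parsing code; whether a separate reader thread is needed then becomes a question about how quickly the rest of the program drains this loop rather than about the parsing itself.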

    Read the article

  • Linux Kernel - Red/Black Trees

    - by CodeRanger
    I'm trying to implement a red/black tree in Linux per task_struct using code from linux/rbtree.h. I can get a red/black tree inserting properly in a standalone space in the kernel such as a module, but when I try to get the same code to function with the rb_root declared in either task_struct or task_struct->files_struct, I get a SEGFAULT every time I try an insert. Here's some code: In task_struct I create an rb_root struct for my tree (not a pointer). In init_task.h, macro INIT_TASK(tsk), I set this equal to RB_ROOT. To do an insert, I use this code: rb_insert(&(current->fd_tree), &rbnode); This is where the issue occurs. My insert command is the standard insert that is documented in all RBTree documentation for the kernel: int my_insert(struct rb_root *root, struct mytype *data) { struct rb_node **new = &(root->rb_node), *parent = NULL; /* Figure out where to put new node */ while (*new) { struct mytype *this = container_of(*new, struct mytype, node); int result = strcmp(data->keystring, this->keystring); parent = *new; if (result < 0) new = &((*new)->rb_left); else if (result > 0) new = &((*new)->rb_right); else return FALSE; } /* Add new node and rebalance tree. */ rb_link_node(&data->node, parent, new); rb_insert_color(&data->node, root); return TRUE; } Is there something I'm missing? Some reason this would work fine if I made a tree root outside of task_struct? If I make the rb_root inside of a module this insert works fine. But once I put the actual tree root in the task_struct or even in the task_struct->files_struct, I get a SEGFAULT. Can a root node not be added in these structs? Any tips are greatly appreciated. I've tried nearly everything I can think of.

    Read the article

  • jQuery does not return a JSON object to Firebug when JSON is generated by PHP

    - by Keyslinger
    The contents of test.json are: {"foo": "The quick brown fox jumps over the lazy dog.","bar": "ABCDEFG","baz": [52, 97]} When I use the following jQuery.ajax() call to process the static JSON inside test.json, $.ajax({ url: 'test.json', dataType: 'json', data: '', success: function(data) { $('.result').html('<p>' + data.foo + '</p>' + '<p>' + data.baz[1] + '</p>'); } }); I get a JSON object that I can browse in Firebug. However, when using the same AJAX call with the URL pointing instead to this PHP script: <?php $arrCoords = array('foo'=>'The quick brown fox jumps over the lazy dog.','bar'=>'ABCDEFG','baz'=>array(52,97)); echo json_encode($arrCoords); ?> which prints this identical JSON object: {"foo":"The quick brown fox jumps over the lazy dog.","bar":"ABCDEFG","baz":[52,97]} I get the proper output in the browser, but Firebug reveals only HTML. There is no JSON tab present when I expand the GET request in Firebug.

    Read the article

  • How Can I Generate Equivalent Output Using the CryptoAPI and the .NET Encryption (TripleDESCryptoSer

    - by S. Butts
    I have some C#/.NET code that encrypts and decrypts data using TripleDES encryption. It sticks to the sample code provided at MSDN pretty closely. The encryption piece looks like the following: TripleDESCryptoServiceProvider _desProvider = new TripleDESCryptoServiceProvider(); //bytes for key and initialization vector //keyBytes is 24 bytes of stuff, vectorBytes is 8 bytes of stuff byte[] keyBytes; byte[] vectorBytes; FileStream fStream = File.Open(locationOfFile, FileMode.Create, FileAccess.Write); CryptoStream cStream = new CryptoStream(fStream, _desProvider.CreateEncryptor(keyBytes, vectorBytes), CryptoStreamMode.Write); BinaryWriter bWriter = new BinaryWriter(cStream); //write out encrypted data //raw data is a few bytes of binary information byte[] rawData; bWriter.Write(rawData); With encrypting and decrypting in C#, this all works like a charm. The problem is I need to write a small Win32 utility that will duplicate the encryption above. I have tried several methods using the CryptoAPI, and I simply do not get output that the .NET piece can decrypt, no matter what I do. Can someone please tell me what the equivalent C++ code is that will produce the same output? I am not certain just what methods of the CryptoAPI the .NET functions use to encrypt the data. What options are used, and what method of generating the key is used? Before someone suggests that I just write it in C# anyway, or create some common library bridge for them, those options are unfortunately off the table. It really has to work in Win32 with .NET and without using a DLL. I have some leeway in changing the C# code. I apologize in advance if this is bone-headed, as I am new to encryption.
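    For comparison, here is roughly what the same operation looks like in Java's JCE (purely illustrative, not the CryptoAPI answer the question is after). The point it tries to make is that interop hinges on the parameters both implementations must agree on: the 24-byte key, the 8-byte IV, and the mode/padding, which for .NET's TripleDESCryptoServiceProvider default to CBC with PKCS7 padding.

    ```java
    import javax.crypto.Cipher;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.SecretKeySpec;

    public class TripleDesDemo {
        public static void main(String[] args) throws Exception {
            byte[] keyBytes = new byte[24];     // 24-byte key, as in the question
            byte[] vectorBytes = new byte[8];   // 8-byte initialization vector
            for (int i = 0; i < keyBytes.length; i++) keyBytes[i] = (byte) (i + 1); // placeholder key material

            byte[] rawData = "a few bytes of data".getBytes("UTF-8");

            // DESede in CBC mode with PKCS5 padding (byte-identical to PKCS7 for 8-byte blocks):
            // key, IV, mode and padding are the settings both sides must match for interop.
            Cipher cipher = Cipher.getInstance("DESede/CBC/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE,
                    new SecretKeySpec(keyBytes, "DESede"),
                    new IvParameterSpec(vectorBytes));
            byte[] encrypted = cipher.doFinal(rawData);
            System.out.println(encrypted.length + " bytes of ciphertext");
        }
    }
    ```

    On the CryptoAPI side the corresponding knobs are exactly what the question asks about (how the key blob is imported and which mode, IV and padding the key handle is configured with), but any specific call sequence there should be verified against the CryptoAPI documentation rather than assumed from this sketch.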

    Read the article

  • template engine implementation

    - by qwerty
    I am currently building this small template engine. It takes as parameters a string containing the template and a dictionary of tag/value pairs to fill in the template. In the engine, I have no idea which tags will be in the template and which won't. I am currently iterating (foreach) over the dictionary, parsing my string that I have put in a StringBuilder, and replacing the tags in the template with their corresponding values. Is there a more efficient/convenient way of doing this? I know the main drawback here is that the StringBuilder is parsed entirely for each tag, which is quite bad... (I am also checking after the process, though this is not included in the sample, that my template does not contain any tag anymore. They are all formatted in the same way: @@tag@@) //Dictionary<string, string> tagsValueCorrespondence; //string template; StringBuilder outputBuilder = new StringBuilder(template); foreach (string tag in tagsValueCorrespondence.Keys) { outputBuilder.Replace(tag, tagsValueCorrespondence[tag]); } template = outputBuilder.ToString(); Responses: @Marc: string template = "Some @@foobar@@ text in a @@bar@@ template"; StringDictionary data = new StringDictionary(); data.Add("foo", "value1"); data.Add("bar", "value2"); data.Add("foo2bar", "value3"); Output: "Some text in a value2 template" instead of: "Some @@foobar@@ text in a value2 template"
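    The question's code is C#, but the usual single-pass alternative is easiest to show with a regex; a minimal Java sketch (names hypothetical): scan the template once for @@tag@@ markers and look each one up in the map, instead of running one full replace pass per dictionary entry.

    ```java
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class TemplateDemo {
        // Matches @@tag@@ and captures the bare tag name.
        private static final Pattern TAG = Pattern.compile("@@(\\w+)@@");

        static String render(String template, Map<String, String> values) {
            Matcher m = TAG.matcher(template);
            StringBuffer out = new StringBuffer();
            while (m.find()) {
                String tag = m.group(1);
                // Tags with no entry in the map are left untouched so a later check can flag them.
                String replacement = values.getOrDefault(tag, m.group(0));
                m.appendReplacement(out, Matcher.quoteReplacement(replacement));
            }
            m.appendTail(out);
            return out.toString();
        }

        public static void main(String[] args) {
            Map<String, String> data = Map.of("foo", "value1", "bar", "value2", "foo2bar", "value3");
            System.out.println(render("Some @@foobar@@ text in a @@bar@@ template", data));
            // prints: Some @@foobar@@ text in a value2 template
        }
    }
    ```

    The template is walked once regardless of how many entries the dictionary has, and a whole tag such as @@foobar@@ can never be half-matched by a shorter key such as foo.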

    Read the article

  • how to use anonymous generic delegate in C# 2.0

    - by matti
    Hi. I have a class called NTree: class NTree<T> { public NTree(T data) { this.data = data; children = new List<NTree<T>>(); _stopTraverse = false; } ... public void Traverse(NTree<T> node, TreeVisitor<T> visitor) { try { _stopTraverse = false; Traverse(node, visitor); } finally { _stopTraverse = false; } } private void TraverseInternal(NTree<T> node, TreeVisitor<T> visitor) { if (_stopTraverse) return; if (!visitor(node.data)) { _stopTraverse = true; } foreach (NTree<T> kid in node.children) Traverse(kid, visitor); } When I try to use Traverse with an anonymous delegate I get: Argument '2': cannot convert from 'anonymous method' to 'NisConverter.TreeVisitor' The code: tTable srcTable = new tTable(); DataRow[] rows; rootTree.Traverse(rootTree, delegate(TableRows tr) { if (tr.TableName == srcTable.mappingname) { rows = tr.Rows; return false; } }); This however produces no errors: static bool TableFinder<TableRows>(TableRows tr) { return true; } ... rootTree.Traverse(rootTree, TableFinder); I have tried to put "arrowhead parentheses" and everything on the anonymous delegate, but it just does not work. Please help me! Thanks & BR -Matti

    Read the article

  • Automation Error upon running VBA script in Excel

    - by brohjoe
    Hi guys, I'm getting an Automation error upon running VBA code in Excel 2007. I'm attempting to connect to a remote SQL Server DB and load data from Excel to SQL Server. The error I get is, "Run-time error '-2147217843(80040e4d)': Automation error". I checked out the MSDN site and it suggested that this may be due to a bug associated with the sqloledb provider and that one way to mitigate this is to use ODBC. Well, I changed the connection string to reflect the ODBC provider and associated parameters, and I'm still getting the same error. Here is the code with ODBC as the provider: Dim cnt As ADODB.Connection Dim rst As ADODB.Recordset Dim stSQL As String Dim wbBook As Workbook Dim wsSheet As Worksheet Dim rnStart As Range Public Sub loadData() 'This was set up using Microsoft ActiveX Data Components version 6.0. 'Create ADODB connection object, open connection and construct the connection string object. Set cnt = New ADODB.Connection cnt.ConnectionString = _ "Driver={SQL Server}; Server=onlineSQLServer2010.foo.com; Database=fooDB Uid=logonalready;Pwd='helpmeOB1';" cnt.Open On Error GoTo ErrorHandler 'Open Excel and run query to export data to SQL Server. strSQL = "SELECT * INTO SalesOrders FROM OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0', & _ "'Data Source=C:\Database.xlsx; Extended Properties=Excel 12.0')...[SalesOrders$]" cnt.Execute (strSQL) 'Error handling. ErrorExit: 'Reclaim memory from the connection objects Set rst = Nothing Set cnt = Nothing Exit Sub ErrorHandler: MsgBox Err.Description, vbCritical Resume ErrorExit 'clean up and reclaim memory resources. cnt.Close If CBool(cnt.State And adStateOpen) Then Set rst = Nothing Set cnt = Nothing End If End Sub

    Read the article

  • Create a VPN with Python

    - by user213060
    I want to make a "tunnel box" device that you plug an input Ethernet line and an output Ethernet line into, and all the traffic that goes through it gets modified in a special way. This is similar to how a firewall, IDS, VPN, or similar boxes are connected inline in a network. I think you can just assume that I am writing a custom VPN in Python for the purpose of this question: LAN computer <--\ LAN computer <---> [LAN switch] <--> ["tunnel box"] <--> [internet modem] <--> LAN computer <--/ My question is, what is a good way to program this "tunnel box" from Python? My application needs to see TCP flows at the network layer, not as individual Ethernet frames. Non-TCP/IP traffic such as ICMP and other types should just be passed through. Example Twisted-like code for my "tunnel box" appliance: from my_code import special_data_conversion_function class StreamInterceptor(twisted.Protocol): def dataReceived(self,data): data=special_data_conversion_function(data) self.outbound_connection.send(data) My initial guesses: TUN/TAP with twisted.pair.tuntap.py - Problem: This seems to only work at the Ethernet frame level, not like my example? SOCKS proxy - Problem: Not transparent as in my diagram. Programs have to be specifically set up for it. Thanks!

    Read the article

  • populate UITableView from json

    - by mcgrailm
    Hello all, I'm working on building my first iDevice app and I am trying to populate a UITableView with data from a JSON result. I can get it to load from a plist array no problem, and I can even see my JSON; the problem I'm having is that the UITableView never gets to see the JSON results. Please bear with me as this is my first time with Obj-C or anything like it. In my .h file: @interface TableViewViewController : UIViewController <UITableViewDataSource, UITableViewDelegate> { NSArray *exercises; NSMutableData *responseData; } In my .m file: - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return exercises.count; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { //create a cell UITableViewCell *cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:@"cell"]; // fill it with contents cell.textLabel.text = [exercises objectAtIndex:indexPath.row]; // return it return cell; } // Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { responseData = [[NSMutableData data] retain]; NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:@"http://url_to_json"]]; [[NSURLConnection alloc] initWithRequest:request delegate:self ]; // load from plist //NSString *myfile = [[NSBundle mainBundle] pathForResource:@"exercise" ofType:@"plist"]; //exercises = [[NSArray alloc] initWithContentsOfFile:myfile]; [super viewDidLoad]; } - (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response { [responseData setLength:0]; } - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { [responseData appendData:data]; } - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error { NSLog(@"Connection failed: %@", [error description]); } - (void)connectionDidFinishLoading:(NSURLConnection *)connection { [connection release]; NSString *responseString = [[NSString alloc] initWithData:responseData encoding:NSUTF8StringEncoding]; [responseData release]; NSDictionary *dictionary = [responseString JSONValue]; NSArray *response = [dictionary objectForKey:@"response"]; exercises = [[NSArray alloc] initWithArray:response]; }

    Read the article

  • How to change identifier quote character in SSIS for connection to ODBC DSN

    - by William Rose
    I'm trying to create an SSIS 2008 Data Source View that reads from an Ingres database via the ODBC driver for Ingres. I've downloaded the Ingres 10 Community Edition to get the ODBC driver, installed it, set up the data access server and a DSN on the server running SSIS. If I connect to the SQL Server 2008 Database Engine on the server running SSIS, I can retrieve data from Ingres over the ODBC DSN by running the following command: SELECT * FROM OPENROWSET( 'MSDASQL' , 'DSN=IngresODBC;UID=testuser;PWD=testpass' , 'SELECT * FROM iitables') So I am quite sure that the ODBC setup is correct. If I try the same query with SQL Server style bracketed identifier quotes, I get an error, as Ingres doesn't support this syntax. SELECT * FROM OPENROWSET( 'MSDASQL' , 'DSN=IngresODBC;UID=testuser;PWD=testpass' , 'SELECT * FROM [iitables]') The error is "[Ingres][Ingres 10.0 ODBC Driver][Ingres 10.0]line 1, Unexpected character '['.". What I am finding is that I get the same error when I try to add tables from Ingres to an SSIS Data Source View. The initial step of selecting the ODBC Provider works fine, and I am shown a list of tables / views to add. I then select any table, and try to add it to the view, and get "ERROR [5000A] [Ingres][Ingres 10.0 ODBC Driver][Ingres 10.0]line 3, Unexpected character '['.". Following Ed Harper's suggestion of creating a named query also seems to be stymied. If I put into my named query the following text: SELECT * FROM "iitables" I still get an error: "ERROR [5000A] [Ingres][Ingres 10.0 ODBC Driver][Ingres 10.0]line 2, Unexpected character '['". According to the error, the query text passed by SSIS to ODBC was: SELECT [iitables].* FROM ( SELECT * FROM "iitables" ) AS [iitables] It seems that SSIS assumes that bracket quote characters are acceptable, when they aren't. How can I persuade it not to use them? Double quotes are acceptable.

    Read the article

  • NSKeyedArchiver on NSArray has large size overhead

    - by redguy
    I'm using NSKeyedArchiver in a Mac OS X program which generates data for an iPhone application. I found out that, by default, the resulting archives are much bigger than I expected. Example: NSMutableArray * ar = [NSMutableArray arrayWithCapacity:10]; for (int i = 0; i < 100000; i++) { NSString * s = [NSString stringWithFormat:@"item%06d", i]; [ar addObject:s]; } [NSKeyedArchiver archiveRootObject:ar toFile: @"NSKeyedArchiver.test"]; This stores 10 * 100000 = 1M bytes of useful data, yet the size of the resulting file is almost three megabytes. The overhead seems to grow with the number of items in the array. In this case, for 1000 items, the file was about 22k. "file" reports that it is an "Apple binary property list" (not the XML format). Is there a simple way to prevent this huge overhead? I wanted to use NSKeyedArchiver for the simplicity it provides. I can write data to my own, non-generic, binary format, but that's not very elegant. Also, aggregating the data into large chunks and feeding these to the NSKeyedArchiver should work, but again, that kinda defeats the point of using a simple, easy, ready-to-use archiver. Am I missing some method call or usage pattern that would reduce this overhead?
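    The effect is easy to reproduce with any general-purpose object archiver, so here is a small Java analogy (purely illustrative; it says nothing about the binary plist format or its exact overhead): serializing 100,000 short strings one object at a time records per-object bookkeeping, while packing the same characters into a single blob first keeps the file close to the payload size, which is essentially the aggregation workaround the question mentions.

    ```java
    import java.io.DataOutputStream;
    import java.io.FileOutputStream;
    import java.io.ObjectOutputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.ArrayList;

    public class OverheadDemo {
        public static void main(String[] args) throws Exception {
            ArrayList<String> items = new ArrayList<>();
            for (int i = 0; i < 100_000; i++) {
                items.add(String.format("item%06d", i));
            }

            // One entry per object: the serializer stores structure and bookkeeping per element.
            try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("perObject.bin"))) {
                out.writeObject(items);
            }

            // Aggregated: join the payload once and write it as a single chunk.
            StringBuilder blob = new StringBuilder();
            for (String s : items) blob.append(s);
            try (DataOutputStream out = new DataOutputStream(new FileOutputStream("aggregated.bin"))) {
                out.write(blob.toString().getBytes(StandardCharsets.UTF_8));
            }
            // Comparing the two file sizes shows how much of the archive is per-item overhead.
        }
    }
    ```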

    Read the article

  • Silverlight 4 race condition with DataGrid master details control

    - by Simon_Weaver
    Basically I want a DataGrid (master) and details (textbox), where the DataGrid is disabled during editing of the details (forcing people to save/cancel)... Here's what I have... I have a DataGrid which serves as my master data. <data:DataGrid IsEnabled="{Binding CanLoad,ElementName=dsReminders}" ItemsSource="{Binding Data, ElementName=dsReminders}" > Its data comes from a DomainDataSource: <riaControls:DomainDataSource Name="dsReminders" AutoLoad="True" ... I have a bound Textbox which is the 'details' (very simple right now). There are buttons (Save/Cancel) which should be enabled when the user tries to edit the text. Unfortunately Silverlight doesn't support UpdateSourceTrigger=PropertyChanged, so I have to raise an event: <TextBox Text="{Binding SelectedItem.AcknowledgedNote, Mode=TwoWay, UpdateSourceTrigger=Explicit, ElementName=gridReminders}" TextChanged="txtAcknowledgedNote_TextChanged"/> The event to handle this calls BindingExpression.UpdateSource to update the source immediately: private void txtAcknowledgedNote_TextChanged(object sender, TextChangedEventArgs e) { BindingExpression be = txtAcknowledgedNote.GetBindingExpression(TextBox.TextProperty); be.UpdateSource(); } In other words, typing in the textbox causes CanLoad of the DomainDataSource to become False (because we're editing). This in turn disables the DataGrid (IsEnabled is bound to it) and enables the 'Cancel' and 'Save' buttons. However, I'm running up against a race condition if I move quickly through rows in the DataGrid (just clicking random rows). The TextChanged event is presumably being fired on the textbox and confusing the DomainDataSource, which then thinks there's been a change. So how should I disable the DataGrid while editing without having the race condition? One obvious solution would be to use KeyDown events to trigger the call to UpdateSource, but I always hate having to do that.

    Read the article

  • Add Hexadecimal Header Info to JPEG File Using Java

    - by jboyd
    I need to add header info to a JPEG file in order to get it to work properly when shared on some websites. I've tracked down the correct info through a lot of hex digging, but now I'm kind of stuck trying to get it into the file. I know where in the file it needs to go, and I know how long it is; my problem is that RandomAccessFile just overwrites existing data in the file and FileOutputStream appends the data to the end. I don't want either: I want to INSERT data starting at the third byte. My example code: File fileToChange = new File("someimage.jpg"); byte[] i = new byte[2]; i[0] = (byte)Integer.decode("0xcc"); i[1] = (byte)Integer.decode("0xcc"); RandomAccessFile f = new RandomAccessFile(new File("videothing.jpg"), "rw"); long aPositionWhereIWantToGo = 2; f.seek(aPositionWhereIWantToGo); // this moves the file pointer to that position f.write((byte[])i); f.close(); So this doesn't work because it overwrites and does not insert; I can't find any way to just insert data into a file.
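    For what it's worth, filesystems don't support in-place insertion, so the file has to be rewritten from the splice point onward. A minimal sketch of the usual approach, assuming the image is small enough to buffer in memory:

    ```java
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class InsertBytesDemo {
        /** Rewrites the file so that extra[] appears at offset, shifting the rest down. */
        static void insertBytes(Path file, int offset, byte[] extra) throws IOException {
            byte[] original = Files.readAllBytes(file);
            byte[] result = new byte[original.length + extra.length];

            System.arraycopy(original, 0, result, 0, offset);              // bytes before the insert point
            System.arraycopy(extra, 0, result, offset, extra.length);      // the inserted header bytes
            System.arraycopy(original, offset, result,                     // the remainder, shifted down
                    offset + extra.length, original.length - offset);

            Files.write(file, result);
        }

        public static void main(String[] args) throws IOException {
            byte[] marker = new byte[] {(byte) 0xcc, (byte) 0xcc};
            insertBytes(Paths.get("someimage.jpg"), 2, marker);
        }
    }
    ```

    For files too large to buffer, the same splice can be done by streaming the head, the new bytes, and the tail into a temporary file and then replacing the original.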

    Read the article

  • NoSuchMethodError in Java using XStream

    - by Brad Germain
    I'm trying to save a database into a file using XStream and then open it again later using XStream and deserialize it back into the objects it was in previously. The database consists of an ArrayList of tables, each of which consists of an ArrayList of a data class, where the data class contains an ArrayList of objects. I'm basically trying to create an SQL compiler. I'm currently getting a java.lang.NoSuchMethodError because of the last line in the load method. Here's what I have: Save Method public void save(Database DB){ File file = new File(DB.getName().toUpperCase() + ".xml"); //Test sample DB.createTable("TBL1(character(a));"); DB.tables.getTable("TBL1").rows.add(new DataList()); DB.tables.getTable("TBL1").rows.getRow(0).add(10); XStream xstream = new XStream(); //Database xstream.alias("Database", Database.class); //Tables xstream.alias("Table", Table.class); //Rows xstream.alias("Row", DataList.class); //Data //xstream.alias("Data", Object.class); //String xml = xstream.toXML(DB); Writer writer = null; try { writer = new FileWriter(file); writer.write(xstream.toXML(DB)); writer.close(); } catch (IOException e) { e.printStackTrace(); } } Load Method public void Load(String dbName){ XStream xstream = new XStream(); BufferedReader br; StringBuffer buff = null; try { br = new BufferedReader(new FileReader(dbName + ".xml")); buff = new StringBuffer(); String line; while((line = br.readLine()) != null){ buff.append(line); } } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } database = (Database)xstream.fromXML(buff.toString()); }
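    Not a fix, but a way to narrow it down: a NoSuchMethodError is usually a sign of compiling against one version of a library and running against a different one (or against clashing jars on the classpath), rather than a problem in your own classes. A minimal round trip that leaves the Database model out entirely can confirm whether the XStream setup itself is healthy; this sketch only uses toXML/fromXML, which exist in XStream's public API.

    ```java
    import com.thoughtworks.xstream.XStream;

    // Isolation test: if this minimal round trip also throws NoSuchMethodError,
    // the problem is the XStream jars/classpath rather than the Database classes.
    public class XStreamSmokeTest {
        public static void main(String[] args) {
            XStream xstream = new XStream();
            String xml = xstream.toXML(new int[] {1, 2, 3});
            int[] back = (int[]) xstream.fromXML(xml);
            System.out.println(xml + " -> " + back.length + " elements");
        }
    }
    ```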

    Read the article

  • Product Catalog Schema design

    - by FlySwat
    I'm building a proof-of-concept schema for a product catalog to possibly replace a very aging and crufty one we use. In our business, we sell both physical materials and services (one-time and recurring charges). The current catalog schema has each distinct category broken out into individual tables; while this is nicely normalized and performs well, it is fairly difficult to extend. Adding a new attribute to a particular product involves changing the table schema and backpopulating old data. An idea I've been toying with has been something along the lines of a base set of entity tables in third normal form; these will contain the facts that are common among ALL products. Then, I'd like to build an Attribute-Entity-Value schema that allows each entity type to be extended in a flexible way using just data and no schema changes. Finally, I'd like to denormalize this data model into materialized views for each individual entity type. These views are what the application would access. We also have many tables that contain business rules and compatibility rules. These would join against the base entity tables instead of the views. My big concerns here are: Performance - Attribute-Entity-Value schemas are flexible, but typically perform poorly; should I be concerned? More Performance - Denormalizing using materialized views may have some risks; I'm not positive on this yet. Complexity - While this schema is flexible and maintainable using just data, I worry that the complexity of the design might make future schema changes difficult. For those who have designed product catalogs for large-scale enterprises, am I going down the totally wrong path? Is there any good best-practice schema design reading available for product catalogs?

    Read the article
