Search Results

Search found 13388 results on 536 pages for 'certificate store'.

Page 456/536

  • PHP GD imagecreatefromstring discards transparency

    - by meustrus
    I've been trying to get transparency to work with my application (which dynamically resizes images before storing them) and I think I've finally narrowed down the problem after much misdirection about imagealphablending and imagesavealpha. The source image is never loaded with proper transparency! // With this line, the output image has no transparency (where it should be // transparent, colors bleed out randomly or it's completely black, depending // on the image) $img = imagecreatefromstring($fileData); // With this line, it works as expected. $img = imagecreatefrompng($fileName); // Blah blah blah, lots of image resize code into $img2 goes here; I finally // tried just outputting $img instead. header('Content-Type: image/png'); imagealphablending($img, FALSE); imagesavealpha($img, TRUE); imagepng($img); imagedestroy($img); It would be some serious architectural difficulty to load the image from a file; this code is being used with a JSON API that gets queried from an iPhone app, and it's easier in this case (and more consistent) to upload images as base64-encoded strings in the POST data. Do I absolutely need to somehow store the image as a file (just so that PHP can load it into memory again)? Is there maybe a way to create a Stream from $fileData that can be passed to imagecreatefrompng?
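
    A possible workaround (not from the original question; it assumes the uploaded bytes really are PNG data and that PHP's data:// stream wrapper is usable in your configuration) is to hand the raw string to imagecreatefrompng() through a data: URI, so the file-based loader and its alpha handling are used without writing a temporary file:

        <?php
        // $fileData holds the decoded PNG bytes from the POST body.
        // Hypothetical sketch: pass them to imagecreatefrompng() via the data:// wrapper.
        $img = imagecreatefrompng('data://image/png;base64,' . base64_encode($fileData));

        // Alpha settings still need to be applied to whatever image is finally output
        // (and to any resize target created with imagecreatetruecolor()).
        imagealphablending($img, false);
        imagesavealpha($img, true);

        header('Content-Type: image/png');
        imagepng($img);
        imagedestroy($img);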

    Read the article

  • Possibility of a custom Contacts field with a set list of values and Contacts lookup performance

    - by areyling
    I'm pretty sure it's not viable to do what I'd like to based on some initial research, but I figured it couldn't hurt to ask the community of experts here in case someone knows a way. I'd like to create a custom field for contacts that the user is able to edit from the main Contacts app; however, the user should only be allowed to select from a list of four specific values. A short list of string values would be ideal, but an int with a min/max range would suffice. I'm interested in knowing if it's possible either way, but also wondering if it makes sense to go this route performance-wise. More specifically, would it be better to look up a contact (based on a phone number) each time a call or SMS message is received, or better to store my own set of data (consisting of name, numbers, and the custom field) and just sync contact info in a thread every so often? Or sync contacts the first time the app is run and then register for changes using ContentObserver? Here is a similar question with an answer that explains how to add a custom field to a contact. Thanks in advance.

    Read the article

  • retrieve value from hashtable with clone of key; C#

    - by Johnny
    I would like to know if there is any possible way to retrieve an item from a hashtable using a key that is identical to the actual key, but a different object. I understand why it is probably not possible, but I would like to see if there is any tricky way to do it. My problem arises from the fact that, being as stupid as I am, I created hashtables with int[] as the keys, with the integer arrays containing indices representing spatial position. I somehow knew that I needed to create a new int[] every time I wanted to add a new entry, but neglected to think that when I generated spatial coordinate arrays later they would be worthless in retrieving the values from my hashtables. Now I am trying to decide whether to rearrange things so that I can store my values in ArrayLists, or whether to search through the list of keys in the Hashtable for the one I need every time I want to get a value, neither of the options being very cool. Unless of course there is a way to get //1 to work like //2! Thanks in advance. static void Main(string[] args) { Hashtable dog = new Hashtable(); //1 int[] man = new int[] { 5 }; dog.Add(man, "hello"); int[] cat = new int[] { 5 }; Console.WriteLine(dog.ContainsKey(cat)); //false //2 int boy = 5; dog.Add(boy, "wtf"); int kitten = 5; Console.WriteLine(dog.ContainsKey(kitten)); //true; }
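
    One possible way around it (a sketch, not from the original post; it assumes .NET 4, where System.Collections.StructuralComparisons is available) is to give the Hashtable a comparer that compares the array contents instead of the references:

        using System;
        using System.Collections;

        class Program
        {
            static void Main()
            {
                // The comparer makes int[] keys compare element-by-element,
                // so a "clone" of the key finds the stored value.
                Hashtable dog = new Hashtable(StructuralComparisons.StructuralEqualityComparer);

                int[] man = new int[] { 5 };
                dog.Add(man, "hello");

                int[] cat = new int[] { 5 };
                Console.WriteLine(dog.ContainsKey(cat)); // true
                Console.WriteLine(dog[cat]);             // hello
            }
        }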

    Read the article

  • Django: How to dynamically add tag field to third party apps without touching app's source code

    - by Chris Lawlor
    Scenario: large project with many third party apps. Want to add tagging to those apps without having to modify the apps' source. My first thought was to first specify a list of models in settings.py (like ['appname.modelname',]) and call django-tagging's register function on each of them. The register function adds a TagField and a custom manager to the specified model. The problem with that approach is that the function needs to run BEFORE the DB schema is generated. I tried running the register function directly in settings.py, but I need django.db.models.get_model to get the actual model reference from only a string, and I can't seem to import that from settings.py - no matter what I try I get an ImportError. The tagging.register function imports OK however. So I changed tactics and wrote a custom management command in an otherwise empty app. The problem there is that the only signal which hooks into syncdb is post_syncdb, which is useless to me since it fires after the DB schema has been generated. The only other approach I can think of at the moment is to generate and run a 'south'-like database schema migration. This seems more like a hack than a solution. This seems like it should be a pretty common need, but I haven't been able to find a clean solution. So my question is: Is it possible to dynamically add fields to a model BEFORE the schema is generated, and more specifically, is it possible to add tagging to a third party model without editing its source? To clarify, I know it is possible to create and store Tags without having a TagField on the model, but there is a major flaw in that approach in that it is difficult to simultaneously create and tag a new model. (See the sketch below for one possible direction.)
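
    A heavily hedged sketch of the settings-driven idea (the TAGGED_MODELS setting name is invented, and it assumes the pre-1.7 syncdb world described above, where every installed app's models.py is imported before tables are created): put the register loop in the models.py of a small glue app listed after the tagged apps in INSTALLED_APPS, so it runs at model-loading time rather than at syncdb time.

        # taggingglue/models.py -- hypothetical app whose only job is to register tags.
        from django.conf import settings
        from django.db.models import get_model
        import tagging

        # e.g. TAGGED_MODELS = ['blog.Entry', 'shop.Product'] in settings.py (invented name)
        for spec in getattr(settings, 'TAGGED_MODELS', []):
            app_label, model_name = spec.split('.')
            model = get_model(app_label, model_name)
            if model is not None:
                # Adds the TagField and custom manager before syncdb builds the schema.
                tagging.register(model)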

    Read the article

  • Best XML format for log events in terms of tool support for data mining and visualization?

    - by Thorbjørn Ravn Andersen
    We want to be able to create log files from our Java application which is suited for later processing by tools to help investigate bugs and gather performance statistics. Currently we use the traditional "log stuff which may or may not be flattened into text form and appended to a log file", but this works the best for small amounts of information read by a human. After careful consideration the best bet has been to store the log events as XML snippets in text files (which is then treated like any other log file), and then download them to the machine with the appropriate tool for post processing. I'd like to use as widely supported an XML format as possible, and right now I am in the "research-then-make-decision" phase. I'd appreciate any help both in terms of XML format and tools and I'd be happy to write glue code to get what I need. What I've found so far: log4j XML format: Supported by chainsaw and Vigilog. Lilith XML format: Supported by Lilith Uninvestigated tools: Microsoft Log Parser: Apparently supports XML. OS X log viewer: plus there is a lot of tools on http://www.loganalysis.org/sections/parsing/generic-log-parsers/ Any suggestions?

    Read the article

  • Looping through an List to select an Element in Selenium

    - by ChrisMcLellan
    I'm attempting to write a Page Class for Links within the Header of the website I'm testing. I have the following link structure below <ul> <li><a href="/" title="Home">Home</a></li> <li><a href="/AboutUs" title="About Us">About Us</a> </li> <li><a href="/Account" title="Account">Account</a></li> <li><a href="/Account/Orders" title="Orders">Orders</a></li> <li><a href="/AdministrationPortal" title="Administration Portal">Administration Portal</a></li> </ul> What I want to do is store these into a List, then when a user selects one of the links, it will take them to the page they should go to. I have started with the following code below List<IWebElement> headerElements = new List<IWebElement>(); headerElements.Add(WebDriver.FindElement(By.LinkText("Home"))); headerElements.Add(WebDriver.FindElement(By.LinkText("About Us"))); headerElements.Add(WebDriver.FindElement(By.LinkText("Account"))); headerElements.Add(WebDriver.FindElement(By.LinkText("Orders"))); headerElements.Add(WebDriver.FindElement(By.LinkText("Administration Portal"))); headerElements.Add(WebDriver.FindElement(By.LinkText("Log in / Register"))); headerElements.Add(WebDriver.FindElement(By.LinkText("Log off"))); I was thinking of using a for loop to do this; would this be the best way? I'm trying to avoid writing methods like the one below for each link public void SelectCreateNewReferralLink() { var selectAboutUsLink = ( new WebDriverWait(WebDriver, new TimeSpan(50))).Until (ExpectedConditions.ElementExists(By.CssSelector("#main > a:nth-of-type(1)"))); selectCreateNewReferralLink.Click(); } I'm using C# with WebDriver to write this. Any help would be great. Thanks Chris
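
    A possible alternative to one method per link (a sketch, not from the original post; it assumes clicking by link text is acceptable): keep the link texts in a collection and use a single parameterised method, so the looping happens over data rather than over copies of code.

        using System;
        using System.Collections.Generic;
        using OpenQA.Selenium;
        using OpenQA.Selenium.Support.UI;

        public class HeaderPage
        {
            private readonly IWebDriver _driver;

            // The link texts come straight from the markup in the question.
            private static readonly List<string> HeaderLinks = new List<string>
            {
                "Home", "About Us", "Account", "Orders", "Administration Portal"
            };

            public HeaderPage(IWebDriver driver)
            {
                _driver = driver;
            }

            // One method covers every header link instead of one method per link.
            public void SelectHeaderLink(string linkText)
            {
                if (!HeaderLinks.Contains(linkText))
                    throw new ArgumentException("Unknown header link: " + linkText);

                var wait = new WebDriverWait(_driver, TimeSpan.FromSeconds(10));
                var link = wait.Until(ExpectedConditions.ElementIsVisible(By.LinkText(linkText)));
                link.Click();
            }
        }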

    Read the article

  • Parsing a JSON feed from YQL using jQuery

    - by Keith
    I am using YQL's query.multi to grab multiple feeds so I can parse a single JSON feed with jQuery and reduce the number of connections I'm making. In order to parse a single feed, I need to be able to check the type of result (photo, item, entry, etc) so I can pull out items in specific ways. Because of the way the items are nested within the JSON feed, I'm not sure the best way to loop through the results and check the type and then loop through the items to display them. Here is a YQL (http://developer.yahoo.com/yql/console/) query.multi example and you can see three different result types (entry, photo, and item) and then the items nested within them: select * from query.multi where queries= "select * from twitter.user.timeline where id='twitter'; select * from flickr.photos.search where has_geo='true' and text='san francisco'; select * from delicious.feeds.popular" or here is the JSON feed itself: http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20query.multi%20where%20queries%3D%22select%20*%20from%20flickr.photos.search%20where%20user_id%3D'23433895%40N00'%3Bselect%20*%20from%20delicious.feeds%20where%20username%3D'keith.muth'%3Bselect%20*%20from%20twitter.user.timeline%20where%20id%3D'keithmuth'%22&format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys&callback=
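
    One way to walk such a combined result (a sketch, not from the original post; yqlUrl stands for the JSON endpoint above, and it assumes the top-level keys under query.results are the type names shown, i.e. entry, photo and item):

        $.getJSON(yqlUrl, function (data) {
            var results = data.query.results;
            $.each(results, function (type, group) {
                // YQL returns an object for a single result and an array for many,
                // so normalise to an array first.
                var items = $.isArray(group) ? group : [group];
                $.each(items, function (i, item) {
                    if (type === 'photo') {
                        // Flickr result
                    } else if (type === 'entry') {
                        // Twitter result
                    } else if (type === 'item') {
                        // Delicious result
                    }
                });
            });
        });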

    Read the article

  • smallest mysql type that accommodates a single decimal

    - by donpal
    Database newbie here. I'm setting up a mysql table. One of the fields will accept a value in increments of 0.5, e.g. 0.5, 1.0, 1.5, 2.0, .... 200.5, etc. I've tried int but it doesn't capture the decimals. `value` int(10), What would be the smallest type that can accommodate this value, considering it's only a single decimal? I was also considering that, because the decimal will always be 0.5 if present at all, I could store it in a separate boolean field, so I would have 2 fields instead. Is this a stupid or somewhat overcomplicated idea? I don't know if it really saves me any memory, and it might get slower now that I'm accessing 2 fields instead of 1 `value` int(10), `half` bool, //or something similar to boolean What are your suggestions guys? Is the first option better, and what's the smallest data type in that case that would get me the 0.5?
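
    For reference (not from the original post; the table name is invented and the byte sizes are approximate MySQL 5.x figures), two possibilities: a DECIMAL with one fractional digit keeps the value exact, or the value can be stored doubled as an integer of half-units.

        -- Exact storage with one fractional digit (roughly 3 bytes)
        CREATE TABLE measurements (
          id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          `value` DECIMAL(4,1) NOT NULL   -- holds 0.5 .. 999.9 exactly
        );

        -- Alternative: store half-units (value * 2) in a SMALLINT (2 bytes),
        -- e.g. 200.5 is stored as 401 and divided by 2 when reading.
        -- `value_half_units` SMALLINT UNSIGNED NOT NULL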

    Read the article

  • Creating futures using Apple's GCD

    - by jer
    I'm working on a library which implements the actor model on top of Grand Central Dispatch (specifically the C level API libdispatch). Basically a brief overview of my system is as such: Communication happens between actors using messages Multicast communication only (one actor to many actors) Senders and receivers are decoupled from one another using a blackboard where messages are pushed to. Messages are sent in the default queue asynchronously using dispatch_group_async() once a message gets pushed onto the blackboard. I'm trying to implement futures in the language right now, so I've created a new type which holds some information: A group of its own The value being 'returned' However, I have a problem since dispatch_block_t is of type void (^)(void) so it doesn't return anything. So my idea of in my future_new() function of setting up another group which can be used to execute a block returning a result, which I can store in my "value" member in my future_t structure, isn't going to work. The rest of the futures implementation is very clear, except it all depends on being able to get the value into the future back from the actor, acting on the message. When using the library, it would greatly reduce its usefulness if I had to ask users (and myself) to be aware when futures were going to be used by other parts of the system—It just isn't practical. I'm wondering if anyone can think of a way around this?
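
    One way around the void-returning dispatch_block_t (a sketch: the future_t/future_new names come from the question, everything else is assumed): have the future own a dispatch group and a value slot, and let the dispatched block write its result into the slot instead of returning it; readers block on the group.

        #include <dispatch/dispatch.h>
        #include <stdlib.h>

        typedef struct future {
            dispatch_group_t group;
            void *value;
        } future_t;

        /* The work block writes into the future rather than returning a value,
         * which fits dispatch_group_async()'s void (^)(void) signature. */
        future_t *future_new(void *(^work)(void)) {
            future_t *f = calloc(1, sizeof *f);
            f->group = dispatch_group_create();
            dispatch_group_async(f->group,
                dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
                    f->value = work();
                });
            return f;
        }

        /* Blocks until the value has been produced. */
        void *future_get(future_t *f) {
            dispatch_group_wait(f->group, DISPATCH_TIME_FOREVER);
            return f->value;
        }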

    Read the article

  • Optimize LINQ Query for use with jQuery Autocomplete

    - by rockinthesixstring
    I'm working on building an HTTPHandler that will serve up plain text for use with jQuery Autocomplete. I have it working now except for when I insert the first bit of text it does not take me to the right portion of the alphabet. Example: If I enter Ne my drop down returns Nlabama Arkansas Notice the "N" from Ne and the "labama" from "Alabama" As I type the third character New, then the jQuery returns the "N" section of the results. My current code looks like this Public Sub ProcessRequest(ByVal context As System.Web.HttpContext) Implements System.Web.IHttpHandler.ProcessRequest ' the page contenttype is plain text' HttpContext.Current.Response.ContentType = "text/plain" ' store the querystring as a variable' Dim qs As Nullable(Of Integer) = Integer.TryParse(HttpContext.Current.Request.QueryString("ID"), Nothing) ' use the RegionsDataContext' Using RegionDC As New DAL.RegionsDataContext 'create a (q)uery variable' Dim q As Object ' if the querystring PID is not blank' ' then we want to return results based on the PID' If Not qs Is Nothing Then ' that fit within the Parent ID' q = (From r In RegionDC.bt_Regions _ Where r.PID = qs _ Select r.Region).ToArray ' now we loop through the array' ' and write out the ressults' For Each item In q HttpContext.Current.Response.Write(item & vbCrLf) Next End If End Using End Sub So where I'm at now is the fact that I stumbled on the "Part" portion of the Autocomplete method whereby I should only return information that is contained within the Part. My question is, how would I implement this concept into my HTTPHandler without doing a fresh SQLQuery on every character change? IE: I do the SQL Query on the QueryString("ID"), and then on every subsequent load of the same ID, we just filter down the "Part". http://www.example.com/ReturnRegions.axd?ID=[someID]&Part=[string]
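
    A sketch of one direction (not from the original post; it assumes System.Linq is imported and that this runs inside the existing If Not qs Is Nothing block): fetch the list once per parent ID, cache it, and filter the cached array by the Part value on subsequent keystrokes, so the prefix narrowing happens without a fresh SQL query each time.

        ' Cache the regions for each parent ID, then filter by the typed prefix.
        Dim part As String = If(HttpContext.Current.Request.QueryString("Part"), String.Empty)
        Dim cacheKey As String = "regions-" & qs.ToString()
        Dim regions As String() = TryCast(HttpRuntime.Cache(cacheKey), String())

        If regions Is Nothing Then
            Using RegionDC As New DAL.RegionsDataContext
                regions = (From r In RegionDC.bt_Regions
                           Where r.PID = qs
                           Select r.Region).ToArray()
            End Using
            HttpRuntime.Cache.Insert(cacheKey, regions)
        End If

        For Each item In regions.Where(Function(s) s.StartsWith(part, StringComparison.OrdinalIgnoreCase))
            HttpContext.Current.Response.Write(item & vbCrLf)
        Next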

    Read the article

  • Xcode "Build and Archive" from command line

    - by Dan Fabulich
    Xcode 3.2 provides an awesome new feature under the Build menu, "Build and Archive" which generates an .ipa file suitable for Ad Hoc distribution. You can also open the Organizer, go to "Archived Applications," and "Submit Application to iTunesConnect." Is there a way to use "Build and Archive" from the command line (as part of a build script)? I'd assume that xcodebuild would be involved somehow, but the man page doesn't seem to say anything about this. UPDATE Michael Grinich requested clarification; here's what exactly you can't do with command-line builds, features you can ONLY do with Xcode's Organizer after you "Build and Archive." You can click "Share Application..." to share your IPA with beta testers. As Guillaume points out below, due to some Xcode magic, this IPA file does not require a separately distributed .mobileprovision file that beta testers need to install; that's magical. No command-line script can do it. For example, Arrix's script (submitted May 1) does not meet that requirement. More importantly, after you've beta tested a build, you can click "Submit Application to iTunes Connect" to submit that EXACT same build to Apple, the very binary you tested, without rebuilding it. That's impossible from the command line, because signing the app is part of the build process; you can sign bits for Ad Hoc beta testing OR you can sign them for submission to the App Store, but not both. No IPA built on the command-line can be beta tested on phones and then submitted directly to Apple. I'd love for someone to come along and prove me wrong: both of these features work great in the Xcode GUI and cannot be replicated from the command line.

    Read the article

  • Google Chrome and (cache or memory leaks).

    - by Alexey Ogarkov
    Hello All, I have a big problem with Google Chrome and its memory. My app is displaying to the user several image charts and reloads them every 10s. In the interval I have code like this var image = new Image(); var src = 'myurl/image'+new Date().getTime(); image.onload = function() { document.getElementById('myimage').src = src; image.onload = image.onabort = image.onerror = null; } image.src = src; So I have no memory leaks in Firefox and IE. Here are the response headers for the images Server Apache-Coyote/1.1 Vary * Cache-Control no-store (// I have tried no-cache, must-revalidate and so on here) Content-Type image/png Content-Length 11131 Date Mon, 31 May 2010 14:00:28 GMT Vary * taken from here The about:cache page does not show my cached images. If I enable the purge-memory button for Chrome (--purge-memory-button parameter) it doesn't help. The images are PNG-24. So I think that the problem is not in the cache. Maybe Google Chrome is not releasing memory for old images. Please help. Any suggestions? Thanks.

    Read the article

  • Trouble with a deprecated constructor in Visual Basic (Visual Studio 2010)

    - by VBPRIML
    My goal is to print labels with barcodes and a date stamp from an entry to a Zebra TLP 2844 when the user clicks the OK button/hits enter. I found what I think might be the code for this on Zebra's site and have been integrating it into my program, but part of it is deprecated and I can't quite figure out how to update it. Below is what I have so far. The printer is attached via USB, and the program will also store the entered numbers in a database, but I have that part done. Any help would be greatly appreciated.

    Public Class ScanForm
        Inherits System.Windows.Forms.Form

        Public Const GENERIC_WRITE = &H40000000
        Public Const OPEN_EXISTING = 3
        Public Const FILE_SHARE_WRITE = &H2

        Dim LPTPORT As String
        Dim hPort As Integer

        Public Declare Function CreateFile Lib "kernel32" Alias "CreateFileA" (ByVal lpFileName As String,
                ByVal dwDesiredAccess As Integer,
                ByVal dwShareMode As Integer,
                <MarshalAs(UnmanagedType.Struct)> ByRef lpSecurityAttributes As SECURITY_ATTRIBUTES,
                ByVal dwCreationDisposition As Integer,
                ByVal dwFlagsAndAttributes As Integer,
                ByVal hTemplateFile As Integer) As Integer

        Public Declare Function CloseHandle Lib "kernel32" Alias "CloseHandle" (ByVal hObject As Integer) As Integer

        Dim retval As Integer

        <StructLayout(LayoutKind.Sequential)> Public Structure SECURITY_ATTRIBUTES
            Private nLength As Integer
            Private lpSecurityDescriptor As Integer
            Private bInheritHandle As Integer
        End Structure

        Private Sub OKButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles OKButton.Click
            Dim TrNum
            Dim TrDate
            Dim SA As SECURITY_ATTRIBUTES
            Dim outFile As FileStream, hPortP As IntPtr

            LPTPORT = "USB001"
            TrNum = Me.ScannedBarcodeText.Text()
            TrDate = Now()

            hPort = CreateFile(LPTPORT, GENERIC_WRITE, FILE_SHARE_WRITE, SA, OPEN_EXISTING, 0, 0)
            hPortP = New IntPtr(hPort) 'convert Integer to IntPtr
            outFile = New FileStream(hPortP, FileAccess.Write) 'Create FileStream using Handle
            Dim fileWriter As New StreamWriter(outFile)

            fileWriter.WriteLine(" ")
            fileWriter.WriteLine("N")
            fileWriter.Write("A50,50,0,4,1,1,N,")
            fileWriter.Write(Chr(34))
            fileWriter.Write(TrNum) 'prints the tracking number variable
            fileWriter.Write(Chr(34))
            fileWriter.Write(Chr(13))
            fileWriter.Write(Chr(10))
            fileWriter.Write("A50,100,0,4,1,1,N,")
            fileWriter.Write(Chr(34))
            fileWriter.Write(TrDate) 'prints the date variable
            fileWriter.Write(Chr(34))
            fileWriter.Write(Chr(13))
            fileWriter.Write(Chr(10))
            fileWriter.WriteLine("P1")
            fileWriter.Flush()
            fileWriter.Close()
            outFile.Close()
            retval = CloseHandle(hPort)

            'Add entry to database
            Using connection As New SqlClient.SqlConnection("Data Source=MNGD-LABS-APP02;Initial Catalog=ScannedDB;Integrated Security=True;Pooling=False;Encrypt=False"), _
            cmd As New SqlClient.SqlCommand("INSERT INTO [ScannedDBTable] (TrackingNumber, Date) VALUES (@TrackingNumber, @Date)", connection)
                cmd.Parameters.Add("@TrackingNumber", SqlDbType.VarChar, 50).Value = TrNum
                cmd.Parameters.Add("@Date", SqlDbType.DateTime, 8).Value = TrDate
                connection.Open()
                cmd.ExecuteNonQuery()
                connection.Close()
            End Using

            'Prepare data for next entry
            ScannedBarcodeText.Clear()
            Me.ScannedBarcodeText.Focus()

        End Sub

    Read the article

  • URL equals and checking Internet access

    - by James P.
    On http://java.sun.com/j2se/1.5.0/docs/api/java/net/URL.html it states that: Compares this URL for equality with another object. If the given object is not a URL then this method immediately returns false. Two URL objects are equal if they have the same protocol, reference equivalent hosts, have the same port number on the host, and the same file and fragment of the file. Two hosts are considered equivalent if both host names can be resolved into the same IP addresses; else if either host name can't be resolved, the host names must be equal without regard to case; or both host names equal to null. Since hosts comparison requires name resolution, this operation is a blocking operation. Note: The defined behavior for equals is known to be inconsistent with virtual hosting in HTTP. According to this, equals will only work if name resolution is possible. Since I can't be sure that a computer has internet access at a given time, should I just use Strings to store addresses instead? Also, how do I go about testing if access is available when requested?
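
    A common way to sidestep both issues (not from the original post) is to keep java.net.URI objects, or plain strings, for identity checks: URI.equals() is defined purely on the parsed text, so it never triggers name resolution and needs no network access.

        import java.net.URI;

        public class UriEquality {
            public static void main(String[] args) {
                URI a = URI.create("http://java.sun.com/j2se/1.5.0/docs/api/");
                URI b = URI.create("http://java.sun.com/j2se/1.5.0/docs/api/");

                // Compared textually, so this works offline and never blocks on DNS.
                System.out.println(a.equals(b)); // true
            }
        }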

    Read the article

  • How many custom tabs can I add to a Facebook page?

    - by Maxi Ferreira
    I'm building a web application to create custom tabs and add them to the user's Facebook fanpages. I know how the process of "installing" FB apps into FB pages so they show up as Page Tabs works, but the problem is the client wants to allow the user to create unlimited Page Tabs for a single FB Page. So I have basically two questions. 1 - Can I reuse a single FB App to be included into the same page several times? If so, is there a way to know the "id" of that Page Tab? So, if I have my FB App look for the Tab content in http://www.mywebapp.com/tab/, I know I get a signed_request with the App ID and the Page ID, but if that same App is installed several times into the same Page, I don't know which Tab the user has clicked on. I know it's a little bit messy, and I don't think there's a way to do this. So my next question is probably more accurate. 2 - Is there a limit on how many Tabs I can add to a single Facebook page? This way, if there's a limit of, say, 12 Tabs, I can create 12 FB Tab Apps, store the IDs, and then I know which Tab of which Page the user is currently viewing. Thanks in advance! Maxi

    Read the article

  • How to implement generic callbacks in C++

    - by Kylotan
    Forgive my ignorance in asking this basic question but I've become so used to using Python where this sort of thing is trivial that I've completely forgotten how I would attempt this in C++. I want to be able to pass a callback to a function that performs a slow process in the background, and have it called later when the process is complete. This callback could be a free function, a static function, or a member function. I'd also like to be able to inject some arbitrary arguments in there for context. (ie. Implementing a very poor man's coroutine, in a way.) On top of that, this function will always take a std::string, which is the output of the process. I don't mind if the position of this argument in the final callback parameter list is fixed. I get the feeling that the answer will involve boost::bind and boost::function but I can't work out the precise invocations that would be necessary in order to create arbitrary callables (while currying them to just take a single string), store them in the background process, and invoke the callable correctly with the string parameter.
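
    A minimal sketch of the shape this usually takes (using std::function/std::bind, the standard equivalents of the boost ones mentioned above; all names here are invented): the background job stores any callable taking the output string, and extra context is curried in at the call site.

        #include <functional>
        #include <iostream>
        #include <string>

        // Anything callable with the process output fits in here.
        using Callback = std::function<void(const std::string&)>;

        void runSlowProcess(Callback done) {
            std::string output = "process output";  // produced by the slow work
            done(output);                            // invoked when the work finishes
        }

        struct Listener {
            void onDone(int context, const std::string& out) {
                std::cout << "[" << context << "] " << out << "\n";
            }
        };

        int main() {
            using namespace std::placeholders;
            Listener listener;

            // Member function with extra context curried in; only the string stays free.
            runSlowProcess(std::bind(&Listener::onDone, &listener, 42, _1));

            // A lambda (or free/static function) works just as well.
            runSlowProcess([](const std::string& out) { std::cout << out << "\n"; });
        }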

    Read the article

  • how to generate unique numbers less than 8 characters long.

    - by loudiyimo
    Hi, I want to generate unique IDs every time I call the method generateCustumerId(). The generated ID must be 8 characters long or less. This requirement is necessary because I need to store it in a data file and the schema fixes this ID at 8 characters. Option 1 works fine. Instead of option 1, I want to use UUID. The problem is that UUID generates an ID which has too many characters. Does someone know how to generate a unique ID which is less than 99999999? option 1 import java.util.HashSet; import java.util.Random; import java.util.Set; public class CustomerIdGenerator { private static Set<String> customerIds = new HashSet<String>(); private static Random random = new Random(); // XXX: replace with java.util.UUID public static String generateCustumerId() { String customerId = null; while (customerId == null || customerIds.contains(customerId)) { customerId = String.valueOf(random.nextInt(89999999) + 10000000); } customerIds.add(customerId); return customerId; } } option 2 generates a unique ID which is too long public static String generateCustumerId() { String ownerId = UUID.randomUUID().toString(); System.out.println("ownerId " + ownerId); return ownerId; }
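
    For comparison (a sketch, not from the original post): since a UUID cannot be squeezed into 8 digits without losing its uniqueness guarantee, a plain counter seeded from the data file at startup stays within 99999999 and never repeats until the space is exhausted. The seeding step is only described in the comment here.

        import java.util.concurrent.atomic.AtomicLong;

        public class CustomerIdGenerator {

            // In practice this would be seeded with the highest ID already
            // present in the data file, so restarts don't reuse numbers.
            private static final AtomicLong counter = new AtomicLong(0);

            public static String generateCustumerId() {
                long next = counter.incrementAndGet();
                if (next > 99999999L) {
                    throw new IllegalStateException("8-digit ID space exhausted");
                }
                return String.valueOf(next); // 1 to 8 characters, unique per run
            }
        }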

    Read the article

  • gen_server with a dict vs mnesia table vs ets

    - by pablo
    Hi, I'm building an Erlang server. Users send HTTP requests to the server to update their status. The HTTP request process on the server saves the user status message in memory. Every minute the server sends all messages to a remote server and clears the memory. If a user updates his status several times in a minute, the last message overrides the previous one. It is important that between reading all the messages and clearing them no other process will be able to write a status message. What is the best way to implement it? gen_server with a dict. The key will be the userid. dict:store/3 will update or create the status. The gen_server solves the 'transaction' issue. mnesia table with ram_copies. Handles transactions and I don't need to implement a gen_server. Is there too much overhead with this solution? ETS table, which is more lightweight, with a gen_server. Is it possible to do the transaction in ETS? To lock the table between reading all the messages and clearing them? Thanks
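
    For the first option, a rough sketch of the two callbacks involved (message and function names assumed): because a gen_server handles one message at a time, returning the whole dict and replacing it with an empty one in the same handle_call is already atomic with respect to incoming status updates.

        %% Store or overwrite a user's latest status message.
        handle_cast({status, UserId, Msg}, Dict) ->
            {noreply, dict:store(UserId, Msg, Dict)}.

        %% Called once a minute: return everything and clear in one step.
        handle_call(flush, _From, Dict) ->
            {reply, dict:to_list(Dict), dict:new()}.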

    Read the article

  • Tail recursion and memoization with C#

    - by Jay
    I'm writing a function that finds the full path of a directory based on a database table of entries. Each record contains a key, the directory's name, and the key of the parent directory (it's the Directory table in an MSI if you're familiar). I had an iterative solution, but it started looking a little nasty. I thought I could write an elegant tail recursive solution, but I'm not sure anymore. I'll show you my code and then explain the issues I'm facing. Dictionary<string, string> m_directoryKeyToFullPathDictionary = new Dictionary<string, string>(); ... private string ExpandDirectoryKey(Database database, string directoryKey) { // check for terminating condition string fullPath; if (m_directoryKeyToFullPathDictionary.TryGetValue(directoryKey, out fullPath)) { return fullPath; } // inductive step Record record = ExecuteQuery(database, "SELECT DefaultDir, Directory_Parent FROM Directory where Directory.Directory='{0}'", directoryKey); // null check string directoryName = record.GetString("DefaultDir"); string parentDirectoryKey = record.GetString("Directory_Parent"); return Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey), directoryName); } This is how the code looked when I realized I had a problem (with some minor validation/massaging removed). I want to use memoization to short circuit whenever possible, but that requires me to make a function call to the dictionary to store the output of the recursive ExpandDirectoryKey call. I realize that I also have a Path.Combine call there, but I think that can be circumvented with a ... + Path.DirectorySeparatorChar + .... I thought about using a helper method that would memoize the directory and return the value so that I could call it like this at the end of the function above: return MemoizeHelper( m_directoryKeyToFullPathDictionary, Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey)), directoryName); But I feel like that's cheating and not going to be optimized as tail recursion. Any ideas? Should I be using a completely different strategy? This doesn't need to be a super efficient algorithm at all, I'm just really curious. I'm using .NET 4.0, btw. Thanks!

    Read the article

  • Google app engine: Poor Performance with JDO + Datastore

    - by Bosh
    I have a simple data model that includes USERS: store basic information (key, name, phone # etc) RELATIONS: describe, e.g. a friendship between two users (supplying a relationship_type + two user keys) I'm getting very poor performance, for instance, if I try to print the first names of all of a user's friends. Say the user has 500 friends: I can fetch the list of friend user_ids very easily in a single query. But then, to pull out first names, I have to do 500 back-and-forth trips to the Datastore, each of which seems to take on the order of 30 ms. If this were SQL, I'd just do a JOIN and get the answer out fast. I understand there are rudimentary facilities for performing joins across un-owned relations in a relaxed implementation of JDO (as described at http://gae-java-persistence.blogspot.com) but they sound experimental and non-standard (e.g. my code won't work in any other JDO implementation). Is this really my best bet? Otherwise, how do people extract satisfactory performance from JDO/Datastore in this kind of (very common) situation? -Bosh
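
    One mitigation that does not require joins (a sketch, not from the original post; the kind and property names are assumed): both JDO's getObjectsById and the low-level datastore API can fetch all the friend entities in a single batch round trip instead of 500 individual gets.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Map;
        import com.google.appengine.api.datastore.*;

        public class FriendNames {
            public static void printFirstNames(List<String> friendIds) {
                DatastoreService ds = DatastoreServiceFactory.getDatastoreService();

                List<Key> keys = new ArrayList<Key>();
                for (String id : friendIds) {
                    keys.add(KeyFactory.createKey("User", id));
                }

                // One batch round trip for all friends instead of one get per friend.
                Map<Key, Entity> friends = ds.get(keys);
                for (Entity friend : friends.values()) {
                    System.out.println(friend.getProperty("firstName"));
                }
            }
        }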

    Read the article

  • SQLite Databases and Grid Hosting

    - by jocull
    I'm considering moving my site from a GoDaddy shared hosting account to a Media Temple grid hosting account in anticipation of traffic. However, I first have some concerns with the grid hosting setup. My site stores a large personal set of data on a per-user basis (possibly 3-4MB per user). At this rate I was worried about blowing over a 1GB MySQL limit in no time. To deal with this I created distributed SQLite databases per user to store large data objects. It's worked wonderfully so far. SQLite is super fast and simple. I know that reading from and writing to files is different in a Grid Hosting environment. I need to know if this setup is going to cause serious problems. These databases are not (and will not be) highly trafficked. They are personal to the user and will only be touched maybe two locations at the same time (one updating the data hourly at the most, and one or more reading on demand). I'd like to keep this setup as getting additional space (beyond 4GB) on a MySQL database seems to be a real trouble point. Will Grid Hosting cause me serious problems? Thanks.

    Read the article

  • using dummy row with NOT NULL to solve DEFAULT NULL

    - by Tony38
    I know having DEFAULT NULLS is not a good practice, but I have many optional lookup values which are FKs in the system, so to solve this issue here is what I am doing: I use NOT NULL for every FK / lookup column. I have the first row in every lookup table, which is PK id = 1, as a dummy row with just "none" in all the columns. This way I can use NOT NULL in my schema and, if needed, reference the 'none' row (PK = 1) for FKs which do not have any lookup value. Is this a good design, or are there other workarounds? EDIT: I have: Neighborhood table Postal table. Every neighborhood has a city, so the FK can be NOT NULL. But not every postal code belongs to a neighborhood. Some do, some don't, depending on the country. So if I use NOT NULL for the FK between postal and neighborhood then I will be screwed, as there has to be some value entered. So what I am doing in essence is: have a row in every table be a dummy row just to link the FKs. This way row one in the neighborhood table will be: n_id = 1 name =none etc... In the postal table I can have: postal_code = 3456A3 FK (city) = Moscow FK (neighborhood_id)=1 as a NOT NULL. If I don't have a dummy row in the neighborhood lookup table then I have to declare FK (neighborhood_id) as a DEFAULT NULL column and store blanks in the table. This is just one example, but there are a huge number of values which will then have blanks in many tables.

    Read the article

  • Silverlight Socket Constantly Returns With Empty Buffer

    - by Benny
    I am using Silverlight to interact with a proxy application that I have developed but, without the proxy sending a message to the Silverlight application, it executes the receive completed handler with an empty buffer ('\0's). Is there something I'm doing wrong? It is causing a major memory leak. this._rawBuffer = new Byte[this.BUFFER_SIZE]; SocketAsyncEventArgs receiveArgs = new SocketAsyncEventArgs(); receiveArgs.SetBuffer(_rawBuffer, 0, _rawBuffer.Length); receiveArgs.Completed += new EventHandler<SocketAsyncEventArgs>(ReceiveComplete); this._client.ReceiveAsync(receiveArgs); if (args.SocketError == SocketError.Success && args.LastOperation == SocketAsyncOperation.Receive) { // Read the current bytes from the stream buffer int bytesRecieved = this._client.ReceiveBufferSize; // If there are bytes to process else the connection is lost if (bytesRecieved > 0) { try { //Find out what we just received string messagePart = UTF8Encoding.UTF8.GetString(_rawBuffer, 0, _rawBuffer.GetLength(0)); //Take out any trailing empty characters from the message messagePart = messagePart.Replace('\0'.ToString(), ""); //Concatenate our current message with any leftovers from previous receipts string fullMessage = _theRest + messagePart; int seperator; //While the index of the seperator (LINE_END defined & initiated as private member) while ((seperator = fullMessage.IndexOf((char)Messages.MessageSeperator.Terminator)) > 0) { //Pull out the first message available (up to the seperator index string message = fullMessage.Substring(0, seperator); //Queue up our new message _messageQueue.Enqueue(message); //Take out our line end character fullMessage = fullMessage.Remove(0, seperator + 1); } //Save whatever was NOT a full message to the private variable used to store the rest _theRest = fullMessage; //Empty the queue of messages if there are any while (this._messageQueue.Count > 0) { ... } } catch (Exception e) { throw e; } // Wait for a new message if (this._isClosing != true) Receive(); } } Thanks in advance.
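
    One detail that may matter here (a suggestion, not a confirmed diagnosis): the handler reads ReceiveBufferSize, which is the buffer's capacity, rather than SocketAsyncEventArgs.BytesTransferred, which is how many bytes this completion actually delivered and is 0 when the connection closes. A sketch of the receive path with that change:

        if (args.SocketError == SocketError.Success &&
            args.LastOperation == SocketAsyncOperation.Receive)
        {
            // Bytes actually received in this completion; 0 means the peer closed.
            int bytesReceived = args.BytesTransferred;

            if (bytesReceived > 0)
            {
                // Decode only what arrived, so there is no trailing '\0' padding to strip.
                string messagePart = UTF8Encoding.UTF8.GetString(_rawBuffer, 0, bytesReceived);
                // ... assemble complete messages and queue them as before ...

                if (this._isClosing != true)
                    Receive();
            }
        }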

    Read the article

  • C++: is it safe to work with std::vectors as if they were arrays?

    - by peoro
    I need to have a fixed-size array of elements and to call on them functions that require to know about how they're placed in memory, in particular: functions like glVertexPointer, that needs to know where the vertices are, how distant they are one from the other and so on. In my case vertices would be members of the elements to store. to get the index of an element within this array, I'd prefer to avoid having an index field within my elements, but would rather play with pointers arithmetic (ie: index of Element *x will be x - & array[0]) -- btw, this sounds dirty to me: is it good practice or should I do something else? Is it safe to use std::vector for this? Something makes me think that an std::array would be more appropriate but: Constructor and destructor for my structure will be rarely called: I don't mind about such overhead. I'm going to set the std::vector capacity to size I need (the size that would use for an std::array, thus won't take any overhead due to sporadic reallocation. I don't mind a little space overhead for std::vector's internal structure. I could use the ability to resize the vector (or better: to have a size chosen during setup), and I think there's no way to do this with std::array, since its size is a template parameter (that's too bad: I could do that even with an old C-like array, just dynamically allocating it on the heap). If std::vector is fine for my purpose I'd like to know into details if it will have some runtime overhead with respect to std::array (or to a plain C array): I know that it'll call the default constructor for any element once I increase its size (but I guess this won't cost anything if my data has got an empty default constructor?), same for destructor. Anything else?
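
    A small sketch of both points (standalone, no GL context needed): vector storage is contiguous, so a pointer to the first element plus sizeof(Element) is exactly the (pointer, stride) pair that APIs like glVertexPointer expect, and pointer subtraction recovers an index without storing one.

        #include <cstddef>
        #include <iostream>
        #include <vector>

        struct Vertex { float x, y, z; };

        int main() {
            std::vector<Vertex> verts(100);        // 100 default-constructed elements

            Vertex* base = &verts[0];              // verts.data() in C++11
            Vertex* some = &verts[42];

            // Index recovered by pointer arithmetic, as with a plain array.
            std::ptrdiff_t index = some - base;
            std::cout << index << "\n";            // 42

            // base and sizeof(Vertex) are what a call like
            // glVertexPointer(3, GL_FLOAT, sizeof(Vertex), base) would be given.
            return 0;
        }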

    Read the article

  • Multiple inequality conditions (range queries) in NoSQL

    - by pableu
    Hi, I have an application where I'd like to use a NoSQL database, but I still want to do range queries over two different properties, for example select all entries between times T1 and T2 where the noiselevel is smaller than X. On the other hand, I would like to use a NoSQL/Key-Value store because my data is very sparse and diverse, and I do not want to create new tables for every new datatype that I might come across. I know that you cannot use multiple inequality filters for the Google Datastore (source). I also know that this feature is coming (according to this). I know that this is also not possible in CouchDB (source). I think I also more or less understand why this is the case. Now, this makes me wonder.. Is that the case with all NoSQL databases? Can other NoSQL systems make range queries over two different properties? How about, for example, Mongo DB? I've looked in the Documentation, but the only thing I've found was the following snippet in their docu: Note that any of the operators on this page can be combined in the same query document. For example, to find all document where j is not equal to 3 and k is greater than 10, you'd query like so: db.things.find({j: {$ne: 3}, k: {$gt: 10} }); So they use greater-than and not-equal on two different properties. They don't say anything about two inequalities ;-) Any input and enlightenment is welcome :-)
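
    For the MongoDB case specifically, the example above generalises to two range conditions on different fields (a sketch in the shell syntax the docs use; collection and field names are assumed, and how well a single index can serve both ranges is a separate question):

        // All readings between T1 and T2 whose noise level is below X
        // (with T1, T2 and X bound to the desired limits).
        db.readings.find({
            time:       { $gte: T1, $lte: T2 },
            noiselevel: { $lt: X }
        });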

    Read the article
