Search Results

Search found 4357 results on 175 pages for 'retrieve'.

Page 138/175 | < Previous Page | 134 135 136 137 138 139 140 141 142 143 144 145  | Next Page >

  • Convert ADO.Net EF Connection String To Be SQL Azure Cloud Connection String Compatible!?

    - by Goober
    The Scenario: I have written a Silverlight 3 application that uses a SQL Server database. I'm moving the application onto the cloud (the Azure platform). In order to do this I have had to set up my database on SQL Azure. I am using the ADO.NET Entity Framework to model my database. I have got the application running on the cloud, but I cannot get it to connect to the database. Below is the original localhost connection string, followed by the SQL Azure connection string that isn't working. The application itself runs fine, but fails when trying to retrieve data.

    The Original Localhost Connection String:

        <add name="InmZenEntities"
             connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;
             provider=System.Data.SqlClient;
             provider connection string=&quot;Data Source=localhost;
             Initial Catalog=InmarsatZenith;
             Integrated Security=True;
             MultipleActiveResultSets=True&quot;"
             providerName="System.Data.EntityClient" />

    The Converted SQL Azure Connection String:

        <add name="InmZenEntities"
             connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;
             provider=System.Data.SqlClient;
             provider connection string=&quot;Server=tcp:MYSERVER.ctp.database.windows.net;
             Database=InmarsatZenith;
             UserID=MYUSERID;Password=MYPASSWORD;
             Trusted_Connection=False;
             MultipleActiveResultSets=True&quot;"
             providerName="System.Data.EntityClient" />

    The Question: Anyone know if this connection string for SQL Azure is correct? Help greatly appreciated.
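
    A few details worth checking against the SQL Azure documentation of the time (hedged suggestions, not from the original post): the SqlClient keyword is "User ID" with a space, logins generally had to be written as user@servername, and early SQL Azure did not support MultipleActiveResultSets, so MARS had to be off. A candidate provider connection string along those lines:

        Server=tcp:MYSERVER.ctp.database.windows.net;
        Database=InmarsatZenith;
        User ID=MYUSERID@MYSERVER;Password=MYPASSWORD;
        Trusted_Connection=False;
        Encrypt=True;
        MultipleActiveResultSets=False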

    Read the article

  • When optimizing database queries, what exactly is the relationship between number of queries and size of queries?

    - by williamjones
    To optimize application speed, everyone always advises minimizing the number of queries an application makes to the database, consolidating them into fewer queries that retrieve more wherever possible. However, this advice always comes with the caution that data transferred is still data transferred, and making fewer queries doesn't make the data transferred free. I'm in a situation where I can over-include on the query in order to cut down the number of queries, and simply remove the unwanted data in the application code.

    Is there any rule of thumb for how much each query costs, to know when to optimize the number of queries versus the size of queries? I've tried to Google for objective performance analysis data, but surprisingly haven't been able to find anything like that. Clearly this relationship changes with factors such as database size, which makes it somewhat individual, but surely it is not so individual that a broad sense of the landscape can't be drawn out? I'm looking for general answers, but for what it's worth, I'm running an application on Heroku.com, which means Ruby on Rails with a Postgres database.
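
    For illustration (this example is mine, not the poster's, and assumes a hypothetical Post/Comment schema): in Rails the trade-off usually shows up as N+1 small queries versus one consolidated, heavier query.

        # N+1: one query for the posts, then one more per post.
        posts = Post.all
        posts.each { |post| puts post.comments.length }

        # Consolidated: two queries total, more data per round trip.
        posts = Post.all(:include => :comments)   # Rails 2.x eager loading
        posts.each { |post| puts post.comments.length }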

    Read the article

  • JPA - Performance when using multiple entity managers

    - by Nguyen Tuan Linh
    My situation: the code is not mine. I have two kinds of database: one is Dad, one is Son. In Dad, I have a table that stores JNDI names. I look up Dad using JNDI, create an entity manager, and retrieve this table. From the retrieved JNDI names, I then create multiple entity managers over multiple Son databases.

    The problem: Son has thousands of entities. It takes each Son database around 10 minutes to load all entities. If there are 4 Son databases, that is 40 minutes.

    My question: is there any way to load all entities once and use them for all entity managers? Please look at the code below. For each Son JNDI:

        Map<String, String> puSonProperties = new HashMap<String, String>();
        puSonProperties.put("javax.persistence.jtaDataSource", sonJndi);
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("PUSon", puSonProperties);

    PUSon - all of them use the same persistence unit.

        log.info("Verify entity manager for son: {0} - {1}", sonCode,
                 emSon.find(Son_configuration.class, 0) != null ? "ok" : "failed!");

    This is the actual code where the loading of all entities begins. 10 minutes.
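
    The entity metadata belongs to each EntityManagerFactory, so it cannot simply be shared across the Son databases; but if the 10 minutes is mostly metadata and connection work, the factories can at least be created concurrently. A sketch (my hypothetical helper, not from the post; assumes the JPA provider tolerates concurrent factory creation):

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class SonFactories {
            // Build one factory per Son JNDI name concurrently, so four 10-minute
            // startups overlap instead of running back to back.
            public static Map<String, EntityManagerFactory> create(List<String> sonJndis) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(sonJndis.size());
                Map<String, Future<EntityManagerFactory>> pending = new HashMap<String, Future<EntityManagerFactory>>();
                for (final String jndi : sonJndis) {
                    pending.put(jndi, pool.submit(new Callable<EntityManagerFactory>() {
                        public EntityManagerFactory call() {
                            Map<String, String> props = new HashMap<String, String>();
                            props.put("javax.persistence.jtaDataSource", jndi);
                            return Persistence.createEntityManagerFactory("PUSon", props);
                        }
                    }));
                }
                Map<String, EntityManagerFactory> factories = new HashMap<String, EntityManagerFactory>();
                for (Map.Entry<String, Future<EntityManagerFactory>> e : pending.entrySet()) {
                    factories.put(e.getKey(), e.getValue().get());
                }
                pool.shutdown();
                return factories;
            }
        }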

    Read the article

  • How to implement session timeout on the web server side?

    - by Morgan Cheng
    I saw a web framework implement in-memory sessions this way: the session object is added to a cache with a timeout, and when the time is out, the session is removed from the cache automatically. To protect against race conditions, each request must acquire a lock on the given session object to proceed, and each request "touches" the session in the cache to refresh the timeout.

    Everything looks fine, until this scenario is discovered. Say one operation takes a long time, longer than the timeout. Another request comes in and waits on the session lock, which is currently held by the long-running request. Finally the long-running request finishes and releases the lock. But since it took longer than the timeout, the session object has already been removed from the cache: the only request holding the lock never got a chance to "touch" the session object. The second request gets the lock but cannot retrieve the expired session object. Oops.

    To fix this, the second request has to re-create the session object. But that is like digging a buried body out of its tomb and trying to bring it back to life; it leads to buggy code. I'm wondering what the best way is to implement session timeout to handle such a scenario. I know current platforms must have good session mechanisms; I just want to know how it works under the hood.
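
    One way out (a sketch of the idea, not any particular platform's implementation): make eviction respect the lock. The reaper may only remove a session whose lock it can take, so a session pinned by a long-running request cannot expire mid-request, and the holder refreshes the timestamp when it releases.

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.locks.ReentrantLock;

        class SessionEntry {
            final ReentrantLock lock = new ReentrantLock();
            volatile long lastTouched = System.currentTimeMillis();
            Object data;
        }

        class SessionStore {
            private final ConcurrentHashMap<String, SessionEntry> cache =
                    new ConcurrentHashMap<String, SessionEntry>();
            private final long timeoutMillis;

            SessionStore(long timeoutMillis) { this.timeoutMillis = timeoutMillis; }

            // Request path: lock, do the work, touch on release.
            void withSession(String id, Runnable work) {
                SessionEntry e = cache.get(id);
                if (e == null) return;                      // genuinely expired
                e.lock.lock();
                try {
                    work.run();
                    e.lastTouched = System.currentTimeMillis();
                } finally {
                    e.lock.unlock();
                }
            }

            // Reaper path: only evict sessions nobody is currently using.
            void evictExpired() {
                for (SessionEntry e : cache.values()) {
                    if (System.currentTimeMillis() - e.lastTouched > timeoutMillis
                            && e.lock.tryLock()) {
                        try {
                            if (System.currentTimeMillis() - e.lastTouched > timeoutMillis) {
                                cache.values().remove(e);
                            }
                        } finally {
                            e.lock.unlock();
                        }
                    }
                }
            }
        }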

    Read the article

  • How can I execute a controller's action method programmatically?

    - by Pure.Krome
    Hi folks, I'm trying to execute a controller's action method programmatically and I'm not sure how.

    Scenario: when my ControllerFactory fails to find the controller, I want it to manually execute a single action method which I have on a simple, custom controller. I don't want to rely on any route data to determine the controller/method, because that route might not have been wired up. E.g.:

        // NOTE: Error handling removed from this example.
        public class MyControllerFactory : DefaultControllerFactory
        {
            protected override IController GetControllerInstance(Type controllerType)
            {
                IController result = null;

                // Try and load the controller, based on the controllerType argument.
                // ... snip

                // Did we retrieve a controller?
                if (result == null)
                {
                    result = new MyCustomController();
                    ((MyCustomController)result).Execute404NotFound(); // <-- HERE!
                }

                return result;
            }
        }

    .. and that method is ..

        public static void Execute404NotFound(this Controller controller)
        {
            var result = new NotFound404Result(); // renamed here: C# type names cannot start with a digit

            // Setup any ViewData.Model stuff.

            result.ExecuteResult(controller.ControllerContext); // <-- RUNTIME ERROR'S HERE
        }

    Now, when the controller factory fails to find a controller, I manually create my own basic controller and call the extension method Execute404NotFound on that instance. That's fine .. until it runs the ExecuteResult(..) method. Why? The controller has no ControllerContext data, so ExecuteResult crashes because it requires some ControllerContext.

    So - can someone out there help me see what I'm doing wrong? Remember - I'm trying to get my controller factory to manually / programmatically call a method on a controller, which of course would return an ActionResult. Please help!
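
    The missing piece is the context itself. A hedged sketch against the ASP.NET MVC 1/2-era API (the RequestContext would come from the factory, e.g. the one handed to CreateController, and the 404 view name here is an assumption):

        public static void Execute404NotFound(this Controller controller, RequestContext requestContext)
        {
            // Give the controller a context before executing any ActionResult.
            controller.ControllerContext = new ControllerContext(requestContext, controller);

            var result = new ViewResult { ViewName = "NotFound404" }; // hypothetical 404 view
            result.ExecuteResult(controller.ControllerContext);
        }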

    Read the article

  • Facebook and retrieving a user's "wall"

    - by Neurofluxation
    I have been given the dubious task of working with the Facebook API. I have managed to get quite far with all the little bits and pieces (bringing in "fan pages" and "friend lists"). However, I cannot seem to import a user's wall into my app using SOLELY JavaScript. I have this code to bring in the user's friends:

        var widget_div = document.getElementById("profile_pics");
        FB.ensureInit(function () {
            FB.Facebook.get_sessionState().waitUntilReady(function() {
                FB.Facebook.apiClient.friends_get(null, function(result) {
                    var markup = "";
                    var num_friends = result ? Math.min(100, result.length) : 0;
                    if (num_friends > 0) {
                        for (var i = 0; i < num_friends; i++) {
                            markup += "<div align='left' class='commented' style='background-color: #fffbcd; border: 1px solid #9d9b80; padding: 0px 10px 0px 0px; margin-bottom: 5px; width: 75%; height: 50px; font-size: 16px;'><fb:profile-pic size='square' uid='" + result[i] + "' facebook-logo='true'></fb:profile-pic><div style='float: right; padding-top: 15px;'><fb:name uid='" + result[i] + "'></fb:name></div></div>";
                        }
                    }
                    widget_div.innerHTML = markup;
                    FB.XFBML.Host.parseDomElement(widget_div);
                });
            });
        });
        /*******YOUR FRIENDS******/
        FB.XFBML.Host.parseDomTree();

    Any idea whether I can change this to retrieve the walls? Thanks in advance, you great people! ^_^
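
    One avenue to try (my sketch, untested against that era's API): the wall was backed by the FQL "stream" table, and the same old JS client that exposes friends_get also exposes fql_query. How the logged-in uid is obtained below is an assumption; substitute whatever the app already uses.

        FB.ensureInit(function() {
            var uid = FB.Facebook.apiClient.get_session().uid; // assumed helper
            FB.Facebook.apiClient.fql_query(
                "SELECT post_id, message, created_time FROM stream WHERE source_id = " + uid,
                function(result) {
                    var markup = "";
                    for (var i = 0; result && i < result.length; i++) {
                        markup += "<div class='commented'>" + result[i].message + "</div>";
                    }
                    document.getElementById("profile_pics").innerHTML = markup;
                });
        });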

    Read the article

  • Persist changes in C

    - by Mohit Deshpande
    I am developing a database-like application that stores a structure containing:

        struct Dictionary {
            char *key;
            char *value;
            struct Dictionary *next;
        };

    As you can see, I am using a linked list to store information. But the problem begins when the user exits the program: I want the information to be stored somewhere. So I was thinking of storing the linked list in a permanent or temporary file using fopen, then retrieving the linked list when the user starts the program. Here is the method that prints the linked list to the console:

        void PrintList() {
            int count = 0;
            struct Dictionary *current;
            current = head;

            if (current == NULL) {
                printf("\nThe list is empty!");
                return;
            }

            printf(" Key \t Value\n");
            printf(" ======== \t ========\n");
            while (current != NULL) {
                count++;
                printf("%d. %s \t %s\n", count, current->key, current->value);
                current = current->next;
            }
        }

    So I am thinking of modifying this method to print the information through fprintf instead of printf, and then the program would just get the information from the file. Could someone help me with how to read and write this file? What kind of file should it be, temporary or regular? How should I format the file (I was thinking of just having the key first, then the value, then a newline character)?
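
    A sketch of one answer, with the key assumption called out in the comments: it should be a regular file (a temporary one defeats the point of surviving restarts), and "key TAB value NEWLINE" works as a format as long as keys and values never contain tabs or newlines.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Write one "key<TAB>value\n" line per node. */
        void SaveList(const char *path, struct Dictionary *head)
        {
            FILE *fp = fopen(path, "w");
            if (fp == NULL) return;
            for (struct Dictionary *cur = head; cur != NULL; cur = cur->next)
                fprintf(fp, "%s\t%s\n", cur->key, cur->value);
            fclose(fp);
        }

        /* Rebuild the list; returns the new head (NULL if no file yet). */
        struct Dictionary *LoadList(const char *path)
        {
            FILE *fp = fopen(path, "r");
            if (fp == NULL) return NULL;

            struct Dictionary *head = NULL, *tail = NULL;
            char line[1024];
            while (fgets(line, sizeof line, fp) != NULL) {
                char *tab = strchr(line, '\t');
                if (tab == NULL) continue;          /* skip malformed lines */
                *tab = '\0';
                char *val = tab + 1;
                val[strcspn(val, "\n")] = '\0';     /* strip trailing newline */

                struct Dictionary *node = malloc(sizeof *node);
                node->key = strdup(line);           /* strdup is POSIX */
                node->value = strdup(val);
                node->next = NULL;
                if (tail) tail->next = node; else head = node;
                tail = node;
            }
            fclose(fp);
            return head;
        }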

    Read the article

  • Get remote image using cURL then resample.

    - by Chris
    I want to be able to retrieve a remote image from a webserver, resample it, and then serve it up to the browser AND save it to a file. Here is what I have so far:

        $ch = curl_init();

        // set URL and other appropriate options
        curl_setopt($ch, CURLOPT_URL, "$rURL");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_HEADER, 0);

        // grab URL and pass it to the browser
        $out = curl_exec($ch);

        // close cURL resource, and free up system resources
        curl_close($ch);

        $imgRes = imagecreatefromstring($out);
        imagejpeg($imgRes, $filename, 70);

        header("Content-Type: image/jpeg");
        header("Content-Transfer-Encoding: binary");
        header("Content-Length: " . filesize($filename));
        readfile("$filename");
        exit();

    Update: I updated the code to include the imagejpeg step that saves the image at lower quality. But how do I then efficiently serve it up to the browser? I currently use readfile("$filename") later in the code, along with some header information, but that means I'm reading the file back in again, which seems inefficient.
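
    One way to avoid the re-read (a sketch): encode once into memory, then both save and serve those same bytes.

        <?php
        // $imgRes and $filename as above.
        ob_start();
        imagejpeg($imgRes, null, 70);        // encode to the output buffer
        $jpeg = ob_get_clean();              // grab the encoded bytes
        imagedestroy($imgRes);

        file_put_contents($filename, $jpeg); // save the resampled copy

        header('Content-Type: image/jpeg');
        header('Content-Length: ' . strlen($jpeg));
        echo $jpeg;                          // serve without touching disk again
        exit;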

    Read the article

  • How to convert a bitmap into a byte array in Android

    - by satyamurthy
    Hi all, I am new to Android. I am retrieving an image from the SD card, converting it into a Bitmap, and converting the Bitmap into a byte array. Please suggest a fix for this code:

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);

            ImageView image = (ImageView) findViewById(R.id.picview);
            EditText value = (EditText) findViewById(R.id.EditText01);

            FileInputStream in;
            BufferedInputStream buf;
            try {
                in = new FileInputStream("/sdcard/pictures/1.jpg");
                buf = new BufferedInputStream(in, 1070);
                System.out.println("1.................." + buf);

                byte[] bMapArray = new byte[buf.available()];
                buf.read(bMapArray);
                Bitmap bMap = BitmapFactory.decodeByteArray(bMapArray, 0, bMapArray.length);

                for (int i = 0; i < bMapArray.length; i++) {
                    System.out.print("bytearray" + bMapArray[i]);
                }

                image.setImageBitmap(bMap);
                value.setText(bMapArray.toString());

                if (in != null) {
                    in.close();
                }
                if (buf != null) {
                    buf.close();
                }
            } catch (Exception e) {
                Log.e("Error reading file", e.toString());
            }
        }

    The output is:

        04-12 16:41:16.168: INFO/System.out(728): 4......................[B@435a2908

    This is the result for the byte array; it does not display the whole byte array (the array size is 1034). Please suggest a solution.
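
    Two things are going on here (my reading, with a sketch): bMapArray.toString() prints the array's identity ("[B@435a2908"), not its contents - Arrays.toString(bMapArray) shows the bytes. And rather than sizing a buffer with available(), the file can be decoded directly and the bytes taken from the Bitmap itself:

        import java.io.ByteArrayOutputStream;
        import java.util.Arrays;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        Bitmap bMap = BitmapFactory.decodeFile("/sdcard/pictures/1.jpg");

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        bMap.compress(Bitmap.CompressFormat.JPEG, 100, out); // re-encode as JPEG
        byte[] bMapArray = out.toByteArray();

        value.setText(Arrays.toString(bMapArray));           // actual contents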

    Read the article

  • jQuery JSON dynamic variable name generation

    - by PlanetUnknown
    I make a jQuery .ajax call and expect a JSON result. The catch is, if there are say 5 authors, I'll get author_details_0, author_details_1, author_details_2, etc. How can I dynamically construct the name of the variable to retrieve from the JSON? I don't know how many authors I'll get; there could be hundreds.

        $.ajax({
            type: "POST",
            url: "/authordetails/show_my_details/",
            data: af_pTempString,
            dataType: "json",
            beforeSend: function() { },
            success: function(jsonData) {
                console.log("Incoming from backend : " + jsonData.toSource());
                if (jsonData.AuthorCount) {
                    console.log("Number of Authors : " + jsonData.AuthorCount);
                    for (i = 0; i < jsonData.AuthorCount; i++) {
                        temp = 'author_details_' + i;  // <-- the name of the variable I'm expecting
                        console.log("Farm information : " + eval(jsonData.temp)); // <-- this doesn't work; how can I get jsonData.author_details_2, for example?
                    }
                }
            }
        });

    Please let me know if you have any idea how to solve this! Much appreciated.
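
    No eval is required: JavaScript reads a property whose name is built at runtime with bracket notation.

        for (var i = 0; i < jsonData.AuthorCount; i++) {
            var details = jsonData['author_details_' + i]; // dynamic property lookup
            console.log("Author information: ", details);
        }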

    Read the article

  • Outlook 2010 Retrieving and restricting appointments programmatically causing recurrences to be included

    - by Mike Dearing
    I wrote a WinForms app that uses Microsoft.Office.Interop.Outlook to retrieve and restrict appointments based upon the date range entered by a user. This worked fine with Outlook 2007 installed; however, now that some users have updated to Outlook 2010, the appointment retrieval pulls back incorrect appointments along with the correct ones falling within the specified date range. The additional incorrect appointments always appear to be recurring appointments.

    I was wondering if this is a known bug, and if so, what exactly is causing these additional recurring appointments to come in? I'd rather not add a workaround where I step through the items after they have been restricted and remove the extra ones, when this functionality works fine with 2007. Note: I've not recompiled or updated any code when experiencing this issue, just run the old program.

    This is the spot in my code where appointments are restricted. It is similar to the way advised in the following MSDN link: http://msdn.microsoft.com/en-us/library/bb611267.aspx

        Microsoft.Office.Interop.Outlook.Items outlookItems = outlookMapiFolder.Items.Restrict(
            "[Start] >= '" + outlookImport.startDay.ToString("g") +
            "' AND [Start] <= '" + outlookImport.endDay.ToString("g") + "'");
        outlookItems.Sort("[Start]", Type.Missing);
        outlookItems.IncludeRecurrences = true;
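
    One thing worth ruling out against the MSDN sample linked above (a hedged observation): in that sample, Sort and IncludeRecurrences are applied to the Items collection before Restrict is called, not after. Reordered, the code would look like:

        Microsoft.Office.Interop.Outlook.Items calendarItems = outlookMapiFolder.Items;
        calendarItems.IncludeRecurrences = true;       // set before restricting
        calendarItems.Sort("[Start]", Type.Missing);   // sort before restricting

        Microsoft.Office.Interop.Outlook.Items outlookItems = calendarItems.Restrict(
            "[Start] >= '" + outlookImport.startDay.ToString("g") +
            "' AND [Start] <= '" + outlookImport.endDay.ToString("g") + "'");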

    Read the article

  • SQL GUID Vs Integer

    - by Dal
    Hi, I have recently started a new job and noticed that all the SQL tables use the GUID data type for the primary key. In my previous job we used integers (auto-increment) for the primary key, and in my opinion that was a lot easier to work with. For example, say you had two related tables, Product and ProductType: I could easily cross-check the ProductTypeID column of both tables for a particular row and quickly map the data in my head, because it's easy to hold a number (2, 4, 45, etc.) as opposed to (E75B92A3-3299-4407-A913-C5CA196B3CAB). The extra frustration comes from wanting to understand how the tables are related; sadly, there is no database diagram :(

    A lot of people say that GUIDs are better because you can define the unique identifier in your C# code, for example using NewID(), without requiring SQL Server to do it - this also lets you know provisionally what the ID will be. But I've seen that it is possible to retrieve the "next auto-incremented integer" too. A DBA contractor reported that our queries could be up to 30% faster if we used the integer type instead of GUIDs.

    Why does the GUID data type exist, and what advantages does it really provide? Even if it's a choice made by some professional, there must be good reasons why it's implemented.

    Read the article

  • Duplicate Items Using Join in NHibernate Map

    - by Colin Bowern
    I am trying to retrieve the individual detail rows without having to create an object for the parent. I have a map which joins a parent table with the detail to achieve this:

        Table("UdfTemplate");
        Id(x => x.Id, "Template_Id");
        Map(x => x.FieldCode, "Field_Code");
        Map(x => x.ClientId, "Client_Id");
        Join("UdfFields", join =>
        {
            join.KeyColumn("Template_Id");
            join.Map(x => x.Name, "COLUMN_NAME");
            join.Map(x => x.Label, "DISPLAY_NAME");
            join.Map(x => x.IsRequired, "MANDATORY_FLAG")
                .CustomType<YesNoType>();
            join.Map(x => x.MaxLength, "DATA_LENGTH");
            join.Map(x => x.Scale, "DATA_SCALE");
            join.Map(x => x.Precision, "DATA_PRECISION");
            join.Map(x => x.MinValue, "MIN_VALUE");
            join.Map(x => x.MaxValue, "MAX_VALUE");
        });

    When I run the query in NHibernate using:

        Session.CreateCriteria(typeof(UserDefinedField))
            .Add(Restrictions.Eq("FieldCode", code))
            .List<UserDefinedField>();

    I get back the first row three times, as opposed to the three individual rows it should return. Looking at the SQL trace in NH Profiler, the query appears to be correct. The problem feels like it is in the mapping, but I am unsure how to troubleshoot that process. I am about to turn on logging to see what I can find, but I thought I would post here in case someone with experience mapping joins knows where I am going wrong.
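
    A possible reading (hedged, since the full classes aren't shown): Join maps a one-to-one extension of the parent row, so three detail rows joined to one parent hydrate the same entity three times. If the detail rows themselves are what's wanted, mapping the detail table as its own entity may fit better - names below are assumed:

        // Hypothetical map for the detail table as a first-class entity.
        public class UdfFieldMap : ClassMap<UdfField>
        {
            public UdfFieldMap()
            {
                Table("UdfFields");
                CompositeId()
                    .KeyProperty(x => x.TemplateId, "Template_Id")
                    .KeyProperty(x => x.Name, "COLUMN_NAME");
                Map(x => x.Label, "DISPLAY_NAME");
                Map(x => x.IsRequired, "MANDATORY_FLAG").CustomType<YesNoType>();
            }
        }

        // Then query UdfField directly instead of the joined parent.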

    Read the article

  • Copy an entity in Google App Engine datastore in Python without knowing property names at 'compile' time

    - by Gordon Worley
    In a Python Google App Engine app I'm writing, I have an entity stored in the datastore that I need to retrieve, make an exact copy of (with the exception of the key), and then put back in. How should I do this? In particular, are there any caveats or tricks I need to be aware of so that I get a copy of the sort I expect, and not something else?

    ETA: Well, I tried it out, and I did run into problems. I would like to make my copy in such a way that I don't have to know the names of the properties when I write the code. My thinking was to do this:

        # theThing = a particular entity we pull from the datastore with model Thing
        copyThing = Thing(user = user)
        for thingProperty in theThing.properties():
            copyThing.__setattr__(thingProperty[0], thingProperty[1])

    This executes without any errors... until I try to pull copyThing from the datastore, at which point I discover that all of the properties are set to None (with the exception of the user and key, obviously). So clearly this code is doing something, since it's replacing the defaults with None (all of the properties have a default value set), but not at all what I want. Suggestions?
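
    What the loop above actually does: properties() returns a dict keyed by property name, so iterating it yields name strings, and thingProperty[0]/thingProperty[1] are just the first two characters of each name. A sketch of the usual fix - copy the values across by name with getattr:

        # Copy every property value across by name; "user" is set explicitly.
        props = {}
        for name in theThing.properties():
            if name != 'user':
                props[name] = getattr(theThing, name)

        copyThing = Thing(user=user, **props)
        copyThing.put()

        # Caveat: for a ReferenceProperty, getattr dereferences (fetches) the
        # referenced entity; prop.get_value_for_datastore(theThing) avoids that.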

    Read the article

  • NTLM Authentication fails when behind Proxy server

    - by Jan Petersen
    Hi All, I've seen a number of posts about consuming web services from behind a proxy server, but none that seems to address this problem. I'm building a desktop application, using Java and JAX-WS in NetBeans. I have a working prototype that can query the server for the authentication mode, successfully authenticate, and retrieve a list of web sites. However, if I run the same app from a network that is behind a proxy server (the proxy does not require authentication), then I run into trouble. I have sniffed the traffic and noticed the following.

    Behind proxy:

        # Result Protocol Host            URL
        1 200    HTTP     host.domain.com /_vti_bin/Authentication.asmx
        2 401    HTTP     host.domain.com /_vti_bin/Webs.asmx
        3 401    HTTP     host.domain.com /_vti_bin/Webs.asmx
        4 401    HTTP     host.domain.com /_vti_bin/Webs.asmx
        5 401    HTTP     host.domain.com /_vti_bin/Webs.asmx

    Without proxy:

        # Result Protocol Host            URL
        1 200    HTTP     host.domain.com /_vti_bin/Authentication.asmx
        2 401    HTTP     host.domain.com /_vti_bin/Webs.asmx
        3 401    HTTP     host.domain.com /_vti_bin/Webs.asmx
        4 401    HTTP     host.domain.com /_vti_bin/Webs.asmx
        5 401    HTTP     host.domain.com /_vti_bin/Webs.asmx
        6 200    HTTP     host.domain.com /_vti_bin/Webs.asmx

    When running the code from a network without a proxy server I successfully authenticate with the server, but when I'm behind the proxy server the traffic is cut off at the 5th message, and thus authentication doesn't succeed. I know from the Java docs that on Microsoft Windows platforms, NTLM authentication attempts to acquire the user credentials from the system without prompting the user's authenticator object; if these credentials are not accepted by the server, then the user's authenticator will be called. Given that my authentication code is called only once, and only as the 5th attempt, it appears as if the connection is dropped, when behind the proxy server, before my Authenticator object is used.

    Is there any way I can control the behavior of the authentication module so that it does not use the system credentials? I have put the source and Java class files of a demo app showing the issue up at the following URLs (it's a bit too long, even in short demo form, to post here): link text

    Br, Jan

    Read the article

  • Django-sphinx works only after app restart

    - by Lhiash
    Hi, I've set up django-sphinx in my project, and it works perfectly, but only for some time. Later it always returns an empty result set. Surprisingly, restarting the Django app fixes it, and search works again - but again only for a short time (or a very limited number of queries). Here's my sphinx.conf:

        source src_questions
        {
            # data source
            type            = mysql
            sql_host        = xxxxxx
            sql_user        = xxxxxx   # replace with your db username
            sql_pass        = xxxxxx   # replace with your db password
            sql_db          = xxxxxx   # replace with your db name
            # these two are optional
            sql_port        = xxxxxx
            #sql_sock       = /var/lib/mysql/mysql.sock

            # pre-query, executed before the main fetch query
            sql_query_pre   = SET NAMES utf8

            # main document fetch query
            sql_query       = SELECT q.id AS id, q.title AS title, q.tagnames AS tags, q.html AS text, q.level AS level \
                              FROM question AS q \
                              WHERE q.deleted=0

            # optional - used by command-line search utility to display document information
            sql_query_info  = SELECT title, id, level FROM question WHERE id=$id

            sql_attr_uint   = level
        }

        index questions
        {
            # which document source to index
            source          = src_questions

            # this is path and index file name without extension
            # you may need to change this path or create this folder
            path            = /home/rafal/core_index/index_questions

            # docinfo (ie. per-document attribute values) storage strategy
            docinfo         = extern

            # morphology
            morphology      = stem_en

            # stopwords file
            #stopwords      = /var/data/sphinx/stopwords.txt

            # minimum word length
            min_word_len    = 3

            # uncomment next 2 lines to allow wildcard (*) searches
            min_infix_len   = 1
            enable_star     = 1

            # charset encoding type
            charset_type    = utf-8
        }

        # indexer settings
        indexer
        {
            # memory limit (default is 32M)
            mem_limit       = 64M
        }

        # searchd settings
        searchd
        {
            # IP address on which search daemon will bind and accept
            # optional, default is to listen on all addresses,
            # ie. address = 0.0.0.0
            address         = 127.0.0.1

            # port on which search daemon will listen
            port            = 3312

            # searchd run info is logged here - create or change the folder
            log             = ../log/sphinx.log

            # all the search queries are logged here
            query_log       = ../log/query.log

            # client read timeout, seconds
            read_timeout    = 5

            # maximum amount of children to fork
            max_children    = 30

            # a file which will contain searchd process ID
            pid_file        = searchd.pid

            # maximum amount of matches this daemon would ever retrieve
            # from each index and serve to client
            max_matches     = 1000
        }

    And here's my search code from views.py:

        content = Question.search.query(keywords)
        if level:
            content = content.filter(level=level)  # level is an array of integers

    There are no errors in any logs; it just isn't returning any results. All help would be most appreciated.

    Read the article

  • Urgent: Sort HashSet data in sequence

    - by vincent low
    I am new to Java. The function I would like to perform is: load a series of data from a file into my HashSet. The problem is, I am able to enter all the data in sequence, but I can't retrieve it out in sequence, based on the account name in the file. Can anyone help with a comment? Below is my code:

        public Set retrieveHistory() {
            Set dataGroup = new HashSet();
            try {
                File file = new File("C:\\Documents and Settings\\vincent\\My Documents\\NetBeansProjects\\vincenttesting\\src\\vincenttesting\\vincenthistory.txt");
                BufferedReader br = new BufferedReader(new FileReader(file));
                String data = br.readLine();
                while (data != null) {
                    System.out.println("This is all the record:" + data);
                    Customer cust = new Customer();

                    // break the data based on the ,
                    String array[] = data.split(",");
                    cust.setCustomerName(array[0]);
                    cust.setpassword(array[1]);
                    cust.setlocation(array[2]);
                    cust.setday(array[3]);
                    cust.setmonth(array[4]);
                    cust.setyear(array[5]);
                    cust.setAmount(Double.parseDouble(array[6]));
                    cust.settransaction(Double.parseDouble(array[7]));
                    dataGroup.add(cust);

                    // then proceed to read the next customer.
                    data = br.readLine();
                }
                br.close();
            } catch (Exception e) {
                System.out.println("error" + e);
            }
            return dataGroup;
        }

        public static void main(String[] args) {
            FileReadDataModel fr = new FileReadDataModel();
            Set customerGroup = fr.retrieveHistory();
            for (Object obj : customerGroup) {
                Customer cust = (Customer) obj;
                System.out.println("Cust name :" + cust.getCustomerName());
                System.out.println("Cust amount :" + cust.getAmount());
            }
        }
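
    HashSet makes no ordering guarantees. A sketch of two drop-in alternatives: LinkedHashSet keeps insertion order, and TreeSet with a comparator keeps the set sorted by a field such as the customer (account) name.

        // Insertion order:
        Set<Customer> dataGroup = new LinkedHashSet<Customer>();

        // Or sorted by account/customer name:
        Set<Customer> dataGroup = new TreeSet<Customer>(new Comparator<Customer>() {
            public int compare(Customer a, Customer b) {
                return a.getCustomerName().compareTo(b.getCustomerName());
            }
        });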

    Read the article

  • Which PHP library should I choose to work with CouchDB?

    - by Guss
    I want to try playing with CouchDB for a new project I'm writing (as a hobby, not part of my job). I'm well versed in PHP, but I haven't programmed with CouchDB at all, and I have little experience with non-SQL databases.

    From looking at CouchDB's "Getting Started with PHP" document, they recommend using a third-party library or writing your own client against their RESTful HTTP API. I'd rather not mess with writing protocol implementations myself at this point, but what is your experience with writing PHP to work with CouchDB? I haven't tested any of the alternatives yet, but I looked at:

    PHPillow: I'm interested in the way they implement ORM. I wasn't planning to do ORM, but my problem domain probably maps well to that method.

    PHP Object Freezer: seems like a poor man's ORM - I could use it to implement an actual ORM, or just as an easy store/retrieve document API, but it seems too primitive.

    PHP-on-Couch: also a bit simple, but they have an interesting API for views, and from the documentation it looks usable enough.

    PHP CouchDB Extension: from the listed options this looks like it has the best chance of making it into the PHP mainline itself, and also has the most complete API.

    Any opinion you wish to share on each library is welcome.

    Read the article

  • Log4Net GetLogger creates rolling files even for unreferenced appenders

    - by ybastiand
    Hi, I have a C# solution that contains three executables, all sharing the same log4net configuration file. At startup, each executable retrieves a logger (one logger per executable, as per the configuration file). When one of the executables performs Log.GetLogger(), it creates all the rolling files instead of only the one rolling file that is referred to as appender-ref in that executable's logger configuration.

    For instance, when I start my sending daemon executable, it performs Log.GetLogger("SendingDaemonLogger"), which creates the three files Log/RuleScheduler.txt, Log/NotificationGenerator.txt and Log/NotificationSender.txt instead of only the desired Log/NotificationSender.txt. Then when I start another of the executables, for instance the rule scheduler daemon, this other process cannot write to Log/RuleScheduler.txt, because it has been created and locked by the sending daemon process.

    I am guessing that there may be three different solutions to my problem:

    1. GetLogger should only create the rolling file appenders that are referenced in the config.

    2. I could have one config file per executable; this way each config file could list only one rolling file appender, and starting each executable would not create the rolling files of the other daemons. I am however reluctant to do this, because some of the configuration (SMTP appender, console appender) is shared between the daemons and I don't want duplicate copies to maintain. Unless there is a way to have one config file include another one?

    3. Maybe there is a way to configure the rolling file appender so that concurrent access across processes is allowed? This solution still isn't perfect in my opinion, because any of the daemons should not be creating the rolling files of other daemons.

    Thanks in advance for your help! I have difficulty posting the config file properly here (this website interprets it as HTML). Please go to the following link to see my log4net configuration file: log4Net configuration file
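
    On option 3: log4net does ship a pluggable locking model for file appenders. A hedged sketch (appender name and layout are assumed) that takes the file lock only for the duration of each write, so several processes can share a log file, at some cost in throughput:

        <appender name="RuleSchedulerAppender" type="log4net.Appender.RollingFileAppender">
          <file value="Log/RuleScheduler.txt" />
          <appendToFile value="true" />
          <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
          <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%date %-5level %logger - %message%newline" />
          </layout>
        </appender>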

    Read the article

  • How does the Select statement work in a Dynamic Linq query?

    - by Richard77
    Hello,

    1) I have a Product table with 4 columns: ProductID, Name, Category, and Price. Here's the regular LINQ to query this table:

        private ProductDataContext db = new ProductDataContext();

        public ActionResult Index()
        {
            var products = from p in db.Products
                           where p.Category == "Soccer"
                           select new ProductInfo { Name = p.Name, Price = p.Price };

            return View(products);
        }

    where ProductInfo is just a class that contains 2 properties (Name and Price). The Index page inherits ViewPage<IEnumerable<ProductInfo>>. Everything works fine.

    2) To dynamically execute the above query, I do this:

        public ActionResult Index()
        {
            var products = db.Products
                .Where("Category = \"Soccer\"")
                .Select(/* WHAT SHOULD I WRITE HERE TO SELECT NAME & PRICE? */);

            return View(products);
        }

    I'm using both the System.Linq.Dynamic namespace and DynamicLibrary.cs (downloaded from ScottGu's blog). Here are my questions:

    What expression do I use to select only Name and Price?

    (Most importantly) How do I retrieve the data in my view? (i.e., what type does the ViewPage inherit? ProductInfo?)
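
    A sketch of one way through (the projection syntax is Dynamic LINQ's; the copy-back into ProductInfo is my assumption about the cleanest way to keep the view strongly typed): .Select("new(Name, Price)") projects into a generated DynamicClass, not into ProductInfo, so a ViewPage<IEnumerable<ProductInfo>> cannot bind to it directly.

        public ActionResult Index()
        {
            var products = db.Products
                .Where("Category = @0", "Soccer")     // parameterized dynamic where
                .Select("new(Name, Price)");          // DynamicClass { Name, Price }

            // Copy into the real ProductInfo so the view stays typed.
            var model = products.Cast<object>().AsEnumerable()
                .Select(p => new ProductInfo
                {
                    Name  = (string)p.GetType().GetProperty("Name").GetValue(p, null),
                    Price = (decimal)p.GetType().GetProperty("Price").GetValue(p, null) // Price type assumed
                });

            return View(model.ToList());
        }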

    Read the article

  • Get Chinese Romanization from Google Translate API

    - by krubo
    The Google language translate API works cleanly to translate into Chinese:

        <script type="text/javascript" src="http://www.google.com/jsapi"></script>
        <script>
        google.load('language', '1');
        function googletrans(text) {
            google.language.translate(text, 'en', 'zh', function(result) {
                alert(result.translation);
            });
        }
        </script>
        <input onchange="googletrans(this.value);">

    Example input: "Hello". Result: "你好".

    My problem is I can't get the romanization (pronunciation using English letters). This is a known issue. Now, the data is right there on translate.google.com (example input: "Hello", result: "Ni hao"), and I can even see it by pointing my browser to:

        http://translate.google.com/translate_a/t?client=t&text=hello&hl=en&sl=en&tl=zh-CN&otf=2&pc=0

    Result:

        {"sentences":[{"trans":"你好","orig":"hello","translit":"Ni hao"}],
         "dict":[{"pos":"interjection","terms":["?"]}],"src":"en"}

    But somehow, when I try to get this URL with Ajax, it fails (XMLHttpRequest exception 101). Is there any way to retrieve this romanization data with Ajax?
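
    The 101 failure is consistent with the same-origin policy: the page cannot XHR to translate.google.com from another domain. The usual workaround is to relay through your own server; a hedged sketch of a same-origin proxy in PHP (any server-side language works):

        <?php
        // proxy.php?text=hello  -- fetch the endpoint server-side, relay the JSON.
        $text = urlencode($_GET['text']);
        $url  = 'http://translate.google.com/translate_a/t?client=t&text=' . $text
              . '&hl=en&sl=en&tl=zh-CN&otf=2&pc=0';

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0'); // endpoint may require a browser UA
        $json = curl_exec($ch);
        curl_close($ch);

        header('Content-Type: application/json');
        echo $json;   // the page can now $.getJSON('proxy.php?text=hello') and read "translit"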

    Read the article

  • Out Of Memory error while executing mysqldump

    - by Nishaz Salam
    Hi, I am getting the following error when trying to back up a database using mysqldump from the command prompt:

        C:\Documents and Settings\bob> C:\Adobe\LiveCycle8.2\mysql\bin\mysqldump --quick --add-locks --lock-tables -c --default-character-set=utf8 --skip-opt -pxxxx -u adobe -r C:\Adobe\LiveCycle8.2\configurationManager\working\upgrade\mysql\adobe.sql -B adobe --port=3306 --host=localhost
        mysqldump: Out of memory (Needed 10380928 bytes)
        mysqldump: Got error: 2008: MySQL client ran out of memory when retrieving data from server

    As you can see, I am using --quick and --skip-opt too; I cannot figure out what is causing the issue. The server log has the following messages:

        100420 15:16:39 InnoDB: Error: cannot allocate 4814100 bytes of memory for
        InnoDB: a BLOB with malloc! Total allocated memory
        InnoDB: by InnoDB 33427880 bytes. Operating system errno: 2
        InnoDB: Check if you should increase the swap file or
        InnoDB: ulimits of your operating system.
        InnoDB: On FreeBSD check you have compiled the OS with
        InnoDB: a big enough maximum process size.
        100420 15:16:40 InnoDB: Warning: could not allocate 3814100 + 1000000 bytes to retrieve
        InnoDB: a big column. Table name adobe/tb_form_data

    Any help on this is highly appreciated.

    P.S. The backup works fine without any issues when I use MySQL Administrator, but since an external app (the Adobe LiveCycle installer) uses the above command to back up the database during install, I need to get this working.

    Thanks, Nishaz Salam

    Read the article

  • Why is my Map broken?

    - by Kirk
    Scenario: I am creating a server which has Room objects, which contain User objects. I want to store the rooms in a Map of some sort, keyed by id (a string).

    Desired behavior: when a user makes a request to the server, I should be able to look up the Room by id from the library, and then add the user to the room if that's what the request needs.

    Currently I use a static function in my Library.java class, where the Map is stored, to retrieve rooms:

        public class Library {
            private static Hashtable<String, Room> myRooms = new Hashtable<String, Room>();

            public static void addRoom(String s, Room r) {
                myRooms.put(s, r);
            }

            public static Room getRoomById(String s) {
                return myRooms.get(s);
            }
        }

    In another class I'll do the equivalent of:

        myRoom.addUser(user);

    What I'm observing is that no matter how many times I add a user to the Room returned by getRoomById, the user is not in the room later. I thought that in Java the returned object is essentially a reference to the same object that is in the Hashtable, but it isn't behaving like that. Is there a way to get this behavior? Maybe with a wrapper of some sort? Am I just using the wrong variant of map? Help?
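
    For what it's worth, Hashtable really does hand back a reference to the stored object; a minimal demo:

        import java.util.ArrayList;
        import java.util.Hashtable;
        import java.util.List;
        import java.util.Map;

        public class MapRefDemo {
            static class Room {
                final List<String> users = new ArrayList<String>();
            }

            public static void main(String[] args) {
                Map<String, Room> rooms = new Hashtable<String, Room>();
                rooms.put("lobby", new Room());

                rooms.get("lobby").users.add("kirk");          // mutate via returned reference
                System.out.println(rooms.get("lobby").users);  // prints [kirk] -- same object
            }
        }

    If users still vanish, the usual suspects are two copies of the Library class (different classloaders or processes), a mismatched id string, or a later put replacing the room - none of which the Map itself can cause.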

    Read the article

  • How to access "Custom" or non-System TFS workitem fields using PowerShell?

    - by DaBozUK
    When using PowerShell to extract information from TFS, I find that I can get at the standard fields but not "custom" fields. I'm not sure custom is the correct term, but for example, if I look at the Process Editor in VS2008 and edit the work item type, there are fields such as those listed below, with Name, Type and RefName:

        Title       String   System.Title
        State       String   System.State
        Rev         Integer  System.Rev
        Changed By  String   System.ChangedBy

    I can access these with Get-TfsItemHistory:

        Get-TfsItemHistory "$/path" -Version "D01/12/10~" -R |
            Select -exp WorkItems |
            Format-Table Title, State, Rev, ChangedBy -Auto

    So far so good. However, there are also some other fields in the work item type, which I'm calling "custom" or non-System fields, e.g.:

        Activated By  String  Microsoft.VSTS.Common.ActivatedBy
        Resolved By   String  Microsoft.VSTS.Common.ResolvedBy

    And the following command does not retrieve the data, just spaces:

        Get-TfsItemHistory "$/path" -Version "D01/12/10~" -R |
            Select -exp WorkItems |
            Format-Table ActivatedBy, ResolvedBy -Auto

    I've also tried the names in quotes and the fully qualified RefName, but no luck. How do you access these non-System fields?

    Thanks, Boz

    UPDATE: From Keith's answer I can get the fields I need:

        Get-TfsItemHistory "$/Hermes/Main" -Version "D01/12/10~" -Recurse `
            | Select ChangeSetId, Comment -exp WorkItems `
            | Select ChangeSetId, Comment, @{n='WI-Id'; e={$_.Id}}, Title -exp Fields `
            | Where {$_.ReferenceName -eq 'Microsoft.VSTS.Common.ResolvedBy'} `
            | Format-Table ChangesetId, Comment, WI-Id, Title, @{n='Resolved By'; e={$_.Value}} -Auto

    Read the article

  • How to prevent Hibernate from nullifying relationship column during entity removal

    - by Grzegorz
    I have two entities, A and B. I need to easily retrieve entities A joined with entities B on the condition of equal values in some column (some column in A equal to some column in B). Those columns are not primary or foreign keys; they contain the same business data. I just need access, from each instance of A, to the collection of B's with the same value in this column. So I model it like this:

        class A {
            @OneToMany
            @JoinColumn(name = "column_in_B", referencedColumnName = "column_in_A")
            Collection<B> bs;
        }

    This way, I can run queries like "select A join fetch a.bs b where b....". (Actually, the real relationship here is many-to-many. But when I use @ManyToMany, Hibernate forces me to use a join table, which doesn't exist here, so I have to use @OneToMany as a workaround.)

    So far so good. The main problem: whenever I delete an instance of A, Hibernate issues "update B set column_in_B = null", because it thinks column_in_B is a foreign key pointing at the primary key in A (and because the row in A is deleted, it tries to clean up the foreign key in B). BUT column_in_B IS NOT a foreign key and can't be modified, because that causes data loss (and this column is NOT NULL anyway in my case, so a data integrity exception is thrown).

    Please help me with this. How do I model such relationships with Hibernate? I would call them "virtual" or "secondary" relationships: they are not based on foreign keys; they are just shortcuts that allow retrieving related objects and querying for them with HQL.
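
    One hedged idea, worth verifying against the Hibernate version in use (I'm not certain it suppresses the cleanup in the one-to-many case): marking the join column non-writable declares the association read-only, so Hibernate should never issue the "update B set column_in_B = null":

        class A {
            @OneToMany
            @JoinColumn(name = "column_in_B", referencedColumnName = "column_in_A",
                        insertable = false, updatable = false)
            Collection<B> bs;
        }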

    Read the article
