Search Results

Search found 3912 results on 157 pages for 'distributed caching'.

Page 109/157

  • Can I use/include gsdll32.dll and redmonnt.dll (no modifications) in a commercial application for my client?

    - by scriptmaster
    Hi - I am not a license expert; however, after a lot of research I am still struggling to answer the following questions and would like to know if my assumptions are right!
    1) Is it legal to include gsdll32.dll and redmonnt.dll in a commercial product?
    2) Should I release any source code of the commercial app where I am using these libraries?
    3) Is there a commercial license for Ghostscript (a small fee that my client can pay to use these libraries)?
    4) What is the meaning of "although it may be distributed ("aggregated") with commercial products." in this paragraph: http://pages.cs.wisc.edu/~ghost/doc/AFPL/8.00/New-user.htm#Commercial_use
    5) What are the alternative solutions?

    Read the article

  • Anyone up to creating a Tomcat-based alternative to GAE?

    - by bach
    Hi, if we had the possibility to run a GAE app without any code change on our own servlet engine, that would be great, because: in case Google changes their billing policy, we could jump to our own server; and in case their current policy doesn't fit our app's needs, we could do things that are not allowed on GAE, compromising on a single JVM and a single DB. We don't actually need a distributed system, but more of a realtime system with synchronization, true locking mechanisms, other servers/software installed on the server machine, a socket interface, etc. Such a package should include at least: Tomcat (or equivalent), DataNucleus Access Platform, and a Task Queue service. Any idea whether it's easy to build such a thing, or whether it already exists somewhere? Thanks

    Read the article

  • plist vs static array

    - by morticae
    Generally, I use static arrays and dictionaries to hold lookup tables in my classes. However, with the number of classes quickly creeping into the hundreds, I'm hesitant to continue this pattern. Even if these static collections are initialized lazily, I've essentially got a bounded memory leak growing as someone uses my app. Most of these are arrays of strings used to convert strings into NSInteger constants that can be used with switch statements, etc. I could just recreate the array/dictionary on every call, but many of these functions are used heavily and/or in tight loops. So I'm trying to come up with a pattern that is both performant and not persistent. If I store the information in a plist, does iPhone OS do anything intelligent about caching those when loaded? Or can you suggest another approach?
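
    (An aside, not from the post.) The trade-off being weighed here - fast lookups versus collections that live forever - can be captured as a lazily built table with an explicit purge hook. A minimal sketch in Java with hypothetical class and constant names; the same shape works for the Objective-C statics described above, with purge() called from a memory-warning handler:

        import java.util.HashMap;
        import java.util.Map;

        public final class ColorLookup {
            // lazily built table; volatile so a purge is visible to all threads
            private static volatile Map<String, Integer> table;

            public static int constantFor(String name) {
                Map<String, Integer> t = table;
                if (t == null) {                 // build on first use only
                    t = new HashMap<String, Integer>();
                    t.put("red", 0);             // hypothetical constants
                    t.put("green", 1);
                    t.put("blue", 2);
                    table = t;
                }
                Integer v = t.get(name);
                return v == null ? -1 : v.intValue();
            }

            // e.g. called from a memory-warning hook; the table is rebuilt on next use
            public static void purge() {
                table = null;
            }
        }

    Tight loops then pay only a null check and a hash lookup, while the tables stop being a permanent resident of memory.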

    Read the article

  • What exactly happens when I call a web service method using PHP::SOAP?

    - by Bedwyr Humphreys
    Say I have a simple client/server scenario with one method:

        // client code
        $client = new SoapClient("service.wsdl");
        $result = $client->getPi();

        // server code
        function getPi() { return 3.141; }
        $server = new SoapServer("service.wsdl");
        $server->addFunction("getPi");
        $server->handle();

    Am I right in thinking that when the client makes a call to the getPi() method, addFunction() gets called every time? Is this really how PHP SOAP web services work, or is there some caching going on? Thanks.

    Read the article

  • Distributing WPF apps to a legacy user base: How seamless is it?

    - by Christian Nunciato
    I'm considering developing a WPF application, to be hosted by a legacy Windows app (C++), and I'm trying to get a better sense of how feasible that will be given the broad user base I'm targeting. Knowing WPF targets .NET 3.5, I'm looking for some insight into what the field looks like right now: who already has the runtime, whether it's distributed by Windows Update and, if so, how (e.g., as an optional or required download, and to which operating systems), whether pre-SP2 XP supports it (and how), and so on. The current version has many thousands of users on all manner of Windows operating systems, and while I'd very much like to leverage WPF to breathe some life into their user experience, I want to make sure I'm not shutting anyone out by doing so, or burdening them with a download they might have to perform manually. I realize most or all of this information is already out there in various places, but I figured I'd ask here first, since some of you have probably already gone down this road and have valuable experiences to share. Thanks in advance!

    Read the article

  • data directory in automake

    - by Alex Farber
    I have some data files that should be distributed with my program. Using dist_pkgdata_DATA in Makefile.am, I get these files installed to /usr/local/share/package-name. The problem is that this data is read-only, and my program needs to modify it. Playing with the dist_sharedstate_DATA, dist_localstate_DATA, and dist_data_DATA variables, I got different installation directories, like /usr/local/com and /usr/local/var, but the data is always read-only. How can I distribute modifiable data files with my package? I need some common directory for all users, or maybe local data in a user directory.
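
    Not an answer from the original page, but a minimal Makefile.am sketch of one common pattern: keep the distributed copy under the read-only $(pkgdatadir) and declare a separate writable directory under $(localstatedir), the tree autoconf reserves for modifiable state. The package name, file names, and the chmod policy are all assumptions:

        # Makefile.am -- "mypkg" and the data files are placeholders
        dist_pkgdata_DATA = defaults.dat          # read-only seed data

        mypkgstatedir = $(localstatedir)/mypkg    # e.g. /usr/local/var/mypkg
        dist_mypkgstate_DATA = counters.dat       # modifiable state

        # _DATA files are installed mode 644 (owner-writable only), so an
        # install-data-hook is the usual escape hatch for an all-users file
        # (the recipe line must be indented with a tab):
        install-data-hook:
        	chmod 0666 $(DESTDIR)$(mypkgstatedir)/counters.dat

    An alternative often seen on desktop systems is to install only the read-only copy and have the program copy it into a per-user directory on first run.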

    Read the article

  • Efficiency of while(true) ServerSocket Listen

    - by Submerged
    I am wondering if a typical while(true) ServerSocket listen loop takes up an entire core just to wait for and accept a client connection (even when implementing Runnable and using Thread.start()). I am implementing a kind of distributed computing cluster, and each computer needs every core it has for computation. A master node needs to communicate with these computers (invoking static methods that modify the algorithm's behavior). The reason I need to use sockets is the cross-platform/cross-language capabilities; in some cases, PHP will be invoking these Java static methods. I used a Java profiler (YourKit) and I can see my running ServerSocket listen thread: it never sleeps and is always shown as running. Is there a better approach to do what I want, or will the performance hit be negligible? Please feel free to offer any suggestions if you can think of a better way. (I've tried RMI, but it isn't supported cross-language.) Thanks, everyone!
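
    (A note and sketch for context, not part of the original question.) A blocking accept() does not busy-wait: the thread sleeps in the kernel until a connection arrives, so it costs essentially no CPU, and profilers often display threads blocked in native I/O as "running". A minimal sketch of the usual pattern, with the port number and handler logic as assumptions:

        import java.io.IOException;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class ControlServer {
            public static void main(String[] args) throws IOException {
                ServerSocket server = new ServerSocket(9000); // port is an assumption
                // a tiny pool: control traffic should never need the compute cores
                ExecutorService pool = Executors.newFixedThreadPool(2);
                while (true) {
                    final Socket client = server.accept(); // sleeps in the kernel; no spinning
                    pool.submit(new Runnable() {
                        public void run() {
                            try {
                                // read a command here and dispatch to the right static method
                            } finally {
                                try { client.close(); } catch (IOException ignored) {}
                            }
                        }
                    });
                }
            }
        }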

    Read the article

  • What is a good Java web crawler library?

    - by DrDee
    Hi, I am about to develop a crawler in Java but don't feel like reinventing the wheel. A quick Google search turns up a whole bunch of Java libraries for building a web crawler. Nutch is of course a very robust package, but it seems a bit too advanced for my needs: I only need to crawl a handful of websites a week, containing a couple of thousand pages each. Which open source Java library would you recommend, considering: speed; multithreading (or even distributed operation); extensibility with new functionality; active maintenance; and documentation?
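
    Not from the post, but to calibrate "a couple of thousand pages a week": this is roughly what a minimal hand-rolled crawler looks like with only the JDK. The seed URL is a placeholder and the regex link extraction is deliberately crude; a real library adds robots.txt handling, proper HTML parsing, and error recovery:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.HashSet;
        import java.util.Set;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class TinyCrawler {
            private static final Pattern LINK = Pattern.compile("href=\"(http[^\"]+)\"");

            public static void main(String[] args) throws Exception {
                Deque<String> frontier = new ArrayDeque<String>();
                Set<String> seen = new HashSet<String>();
                frontier.add("http://example.com/"); // seed URL is a placeholder

                while (!frontier.isEmpty() && seen.size() < 1000) {
                    String url = frontier.poll();
                    if (!seen.add(url)) continue;   // skip already-visited pages

                    // fetch the page body (no error handling in this sketch)
                    StringBuilder body = new StringBuilder();
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(new URL(url).openStream(), "UTF-8"));
                    try {
                        String line;
                        while ((line = in.readLine()) != null) body.append(line).append('\n');
                    } finally {
                        in.close();
                    }

                    // crude link extraction; a real library parses HTML properly
                    Matcher m = LINK.matcher(body);
                    while (m.find()) frontier.add(m.group(1));
                    Thread.sleep(1000); // stay polite: at most one request per second
                }
            }
        }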

    Read the article

  • Can EC2 instances be set up to come from different IP ranges?

    - by Joshua Frank
    I need to run a web crawler and I want to do it from EC2 because I want the HTTP requests to come from different IP ranges so I don't get blocked. So I thought distributing this on EC2 instances might help, but I can't find any information about what the outbound IP range will be. I don't want to go to the trouble of figuring out the extra complexity of EC2 and distributed data, only to find that all the instances use the same address block and I get blocked by the server anyway. NOTE: This isn't for a DoS attack or anything. I'm trying to harvest data for a legitimate business purpose, I'm respecting robots.txt, and I'm only making one request per second, but the host is still shutting me down. Edit: Commenter Paul Dixon suggests that the act of blocking even my modest crawl indicates that the host doesn't want me to crawl them and therefore that I shouldn't do it (even assuming I can work around the blocking). Do people agree with this?

    Read the article

  • Setting up a web developer lab for learning purposes

    - by Saleh Al-Abbas
    I'm not a developer by profession, so I'm not exposed to the real-world technical problems that professional developers face. I've read and heard about web farms, integration between different systems, load balancing, etc. I was therefore wondering whether there are ways for an individual developer to create an environment that simulates real-world situations with a minimal number of machines, covering: web farms and caching; simulating many users accessing your website (stress tests?); performance; load balancing; and anything else you think I should consider. By the way, I have a server machine and one PC, and I don't mind investing in tools and software. P.S. I'm using Microsoft technologies for development, but I hope this is not a limiting factor. Thanks

    Read the article

  • ActionScript 3: XML cached in local testing

    - by Boun
    Hi, my question is about XML loading: I need to avoid XML caching. On a web server, the technique is to add a random parameter so the XML file is reloaded each time. But in local testing (in the Flash CS4 IDE, Ctrl+Enter), the following lines are not possible:

        var my_date:Date = new Date();
        path = "toto.xml?time=" + my_date.getSeconds() + my_date.getMilliseconds();

    Is there any trick to bypass this issue? I've read on different forums about the "delete" method: delete the XML object and then recreate a new one. In my case, I put:

        myXML = null;
        myXML = new XML(loadedData);

    But it doesn't work at all. I've spent many hours on this problem; if anyone has a good solution... Thank you.

    Read the article

  • Should a given URI in a RESTful architecture always return the same response?

    - by keithjgrant
    This is kind of a follow-up question to this one. So, is having a unique response for any given URI a core tenet of RESTful architecture? A lot of discussion here tends in that direction, but I haven't seen it stated anywhere as a hard-and-fast rule. I understand its value (for caching, crawling, passing links, etc.), but I also see things like the Twitter API violate it (a request to http://api.twitter.com/1/statuses/friends_timeline.xml will vary based on the username given), and I understand there are times when it may be necessary - not to mention that a chronologically paged resource will also change as new elements are added. Should I strive to eliminate varied responses from the same URI altogether, or do I just accept that sometimes it isn't practical, and trust that as long as I minimize its occurrence, I'll be in decent shape?

    Read the article

  • Grails / GORM, Disable First-level Cache

    - by Stephen Swensen
    Suppose I have the following domain class mapping to a legacy table, utilizing a read-only second-level cache and having a transient field:

        class DomainObject {
            static def transients = ['userId']

            Long id
            Long userId

            static mapping = {
                cache usage: 'read-only'
                table 'SOME_TABLE'
            }
        }

    I have a problem: references to DomainObject are shared due to first-level caching, and thus the transient fields write over each other. For example:

        def r1 = DomainObject.get(1)
        r1.userId = 22
        def r2 = DomainObject.get(1)
        r2.userId = 34
        assert r1.userId == 34

    That is, r1 and r2 are references to the same instance. This is undesirable; I would like to cache the table data without sharing references. Any ideas?

    [Edit] Understanding the situation better now, I believe my question boils down to the following: is there any way to disable the first-level cache for a specific domain class while still using the second-level cache?
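
    Not an answer from the thread, but since GORM sits on top of Hibernate, one workaround worth sketching (in the plain Hibernate API; the session-factory wiring is assumed) is to evict each loaded instance from the session, so a later get() materializes a fresh copy from the second-level cache rather than returning the shared reference:

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;

        public class DetachedLookup {
            private final SessionFactory sessionFactory; // assume configured elsewhere

            public DetachedLookup(SessionFactory sessionFactory) {
                this.sessionFactory = sessionFactory;
            }

            /** Load a DomainObject and immediately detach it from the first-level cache. */
            public DomainObject fresh(long id) {
                Session session = sessionFactory.getCurrentSession();
                DomainObject obj = (DomainObject) session.get(DomainObject.class, id);
                session.evict(obj); // obj no longer shares state with later get() calls
                return obj;
            }
        }

    In GORM itself, the per-instance equivalent is discard(), which detaches the object from the underlying session.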

    Read the article

  • Standard way of distributing source code?

    - by penyuan
    I am relatively new to programming and have built a few working C++ command-line programs with Xcode on Mac OS X (no dependencies on Mac-only libraries or APIs). My question is: what is the standard way of packaging and distributing the source code (and possibly compiled binaries)? Almost all Linux programs seem to be distributed such that a user simply needs to run ./configure && make && make install from the source directory. Thank you.
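
    Not from the original post, but for a dependency-free C++ command-line program, that three-command convention usually means a tarball built with GNU Autotools. A minimal sketch, with the project name, version, and source file all assumed:

        # configure.ac -- "myprog" and the version are placeholders
        AC_INIT([myprog], [0.1])
        AM_INIT_AUTOMAKE([foreign])
        AC_PROG_CXX
        AC_CONFIG_FILES([Makefile])
        AC_OUTPUT

        # Makefile.am
        bin_PROGRAMS = myprog
        myprog_SOURCES = main.cpp

    Running autoreconf --install once generates the configure script, and make dist then produces a myprog-0.1.tar.gz that users unpack and build with the familiar ./configure && make && make install.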

    Read the article

  • SQL Server, Remote Stored Procedure, and DTC Transactions

    - by marc
    Our organization has a lot of its essential data in a mainframe Adabas database. We have ODBC access to this data, and from C# we have queried/updated it successfully using ODBC/Natural "stored procedures". What we'd like to be able to do now is query a mainframe table from within SQL Server 2005 stored procs, dump the results into a table variable, massage them, and join the result with native SQL data as a result set. Executing the Natural proc from SQL works fine when we're just selecting from it; however, when we insert the result into a table variable, SQL seems to start a distributed transaction, which in turn seems to wreak havoc with our connections. Given that we're not performing updates, is it possible to turn off this DTC-escalation behavior? Any tips on getting DTC set up properly to talk to DataDirect's (formerly Neon Systems) Shadow ODBC driver?

    Read the article

  • Zend_Cache_Backend_Sqlite vs Zend_Cache_Backend_File

    - by Alekc
    Hi, currently I'm using Zend_Cache_Backend_File for caching in my project (especially responses from external web services). I was wondering if I could find some benefit in migrating the structure to Zend_Cache_Backend_Sqlite. Possible advantages: the file system stays well-ordered (only one file in the cache folder), and removing expired entries should be quicker (my assumption, since Zend wouldn't need to scan the internal metadata of each cache entry for its expiry date). Possible disadvantage: finding a record to read may be slower in terms of speed (with files, Zend checks whether the file exists based on the filename, which should be a bit quicker). I've tried to search a bit on the internet, but it seems there isn't much discussion about the matter. What do you think about it? Thanks in advance.

    Read the article

  • Linux periodically "losing" ability to connect to server via SSH?

    - by gct
    I know this isn't exactly a programming question, but it popped up in my use of git for programming projects, at least. I've got a web server that I use to host my git repos, but my Ubuntu box seems to "lose" the ability to connect to it via SSH: I'll get a "connection refused" error when I try to ssh or use git. Rebooting my local machine fixes the problem, but only temporarily. I can still reach the web interface just fine, and the problem manifests with other servers as well. I've been working around it by pulling my changes over to my laptop and pushing from there, but that's sub-optimal, as you can imagine. Has anyone seen something like this? I'd be tempted to say it's some kind of IP caching problem, but I can't connect even when using the server's IP address directly... Running Ubuntu 9.04.

    Read the article

  • ASP.NET OutputCache VaryByParam and VaryByHeader with AJAX

    - by DennyDotNet
    I'm trying to do some caching using VaryByParam AND VaryByHeader. When an AJAX request comes in, I return partial XHTML. When a regular request comes in, I send the partial XHTML page with header/footer. I tried to cache the page with:

        [OutputCache(Duration = 5, VaryByParam = "nickname,page", VaryByHeader = "X-Requested-With")]

    However, this doesn't work: if I do a regular request first and then run the AJAX call, I get the full cached page instead of the partial, and vice versa. It seems like VaryByHeader is being ignored. Is it because X-Requested-With is omitted on normal requests? Or perhaps it's doing VaryByParam OR VaryByHeader? The obvious way around this is for AJAX requests to call a different method that only returns partial pages, but I'd like to avoid that if possible. I'm using ASP.NET MVC 1.0 with the OutputCacheAttribute.

    Read the article

  • Does the iPhone compress images saved within my app's documents directory?

    - by Jane Sales
    We are caching images downloaded from our server. We write them to local storage like this:

        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0];
        NSString* folder = [[documentsDirectory stringByAppendingPathComponent:@"flook.images"] retain];
        NSString* fileName = [folder stringByAppendingFormat:@"/%@", aBaseFilename];
        BOOL writeSuccess = [anImageData writeToFile:fileName atomically:NO];

    The downloaded images are always the expected size, around 45-85KB. Later, we read images from our cache like this:

        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0];
        NSString* folder = [[documentsDirectory stringByAppendingPathComponent:@"flook.images"] retain];
        NSString* fileName = [folder stringByAppendingFormat:@"/%@", aBaseFilename];
        image = [UIImage imageWithContentsOfFile:fileName];

    Occasionally, the images returned from this cache read are much smaller - around 5-10KB - because they are much more compressed. Has the OS done this to us?

    Read the article

  • How do I prevent an ASP.NET MVC deployment on IIS 6.0, using wildcard mapping, from attempting to handle requests for hidden shares?

    - by Rob
    As noted in the title, what is the best way to configure an IIS 6.0 deployment of an ASP.NET MVC application so that connections to hidden shares are ignored? The application in question uses wildcard mapping to allow clean URLs, since we plan to upgrade to IIS 7.0 in the near future; we are also handling the caching and compression issues with a custom library, so we would like to avoid turning wildcard mapping off unless absolutely necessary. Below is one of the errors from the application, to give you an example of what we are seeing:

        System.Web.HttpException
        Time Stamp  - 03 Mar 2010, 08:11:44
        Path        - N/A, Internal Server Operation
        Message     - The controller for path '/C$' could not be found or it does not implement IController.
        Target Site - System.Web.Mvc.IController GetControllerInstance(System.Type)
        Stack Trace -
            at System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(Type controllerType)
            at System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName)
            at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContextBase httpContext)
            at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContext httpContext)
            at System.Web.Mvc.MvcHandler.System.Web.IHttpHandler.ProcessRequest(HttpContext httpContext)
            at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
            at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    Read the article

  • XML over HTTP with JMS and Spring

    - by Will Sumekar
    I have a legacy HTTP server to which I need to send an XML file over an HTTP POST request using Java (not a browser), and the server will respond with another XML in its HTTP response. It is similar to a web service, but there is no WSDL and I have to follow the existing XML structure to construct the XML I send. I have done some research and found an example that matches my requirement here. The example uses HttpClient from Apache Commons. (There are also other examples I found, but they use the java.net networking package (like URLConnection), which is tedious, so I don't want to use them.) But it's also my requirement to use Spring and JMS. I know from Spring's reference that it's possible to combine HttpClient, JMS and Spring. My question is: how? Note that it's NOT in my requirements to use HttpClient; if you have a better suggestion, I'm open to it. Appreciate it. For your reference, here's the XML-over-HTTP example I've been talking about:

        /*
         * Copyright 2002-2004 The Apache Software Foundation.
         * Licensed under the Apache License, Version 2.0 (the "License");
         * see http://www.apache.org/licenses/LICENSE-2.0
         */
        import java.io.File;
        import java.io.FileInputStream;

        import org.apache.commons.httpclient.HttpClient;
        import org.apache.commons.httpclient.methods.InputStreamRequestEntity;
        import org.apache.commons.httpclient.methods.PostMethod;

        /**
         * Sample application demonstrating the Jakarta HttpClient API:
         * it sends an XML document to a remote web server using HTTP POST.
         *
         * @author Sean C. Sullivan
         * @author Ortwin Glück
         * @author Oleg Kalnichevski
         */
        public class PostXML {
            /**
             * Usage: java PostXML <url> <filename>
             * Argument 0 is the URL of a web server; argument 1 is a local file.
             */
            public static void main(String[] args) throws Exception {
                if (args.length != 2) {
                    System.out.println("Usage: java -classpath <classpath> "
                        + "[-Dorg.apache.commons.logging.simplelog.defaultlog=<loglevel>] "
                        + "PostXML <url> <filename>");
                    System.out.println("<classpath> - must contain commons-httpclient.jar and commons-logging.jar");
                    System.out.println("<loglevel>  - one of error, warn, info, debug, trace");
                    System.out.println("<url>       - the URL to post the file to");
                    System.out.println("<filename>  - file to post to the URL");
                    System.exit(1);
                }
                // Get target URL and the file to be posted
                String strURL = args[0];
                String strXMLFilename = args[1];
                File input = new File(strXMLFilename);

                // Prepare the HTTP POST. The request content is read directly from
                // the input stream; specifying the length explicitly avoids having
                // to buffer the whole body just to determine it.
                PostMethod post = new PostMethod(strURL);
                post.setRequestEntity(new InputStreamRequestEntity(
                    new FileInputStream(input), input.length()));

                // Specify content type and encoding
                // (if not set explicitly, ISO-8859-1 is assumed)
                post.setRequestHeader("Content-type", "text/xml; charset=ISO-8859-1");

                HttpClient httpclient = new HttpClient();
                try {
                    int result = httpclient.executeMethod(post);
                    System.out.println("Response status code: " + result);
                    System.out.println("Response body: ");
                    System.out.println(post.getResponseBodyAsString());
                } finally {
                    // Release the connection back to the pool once done
                    post.releaseConnection();
                }
            }
        }
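
    Not from the thread, but to make the Spring/JMS side concrete: a hedged sketch of one common arrangement, in which a Spring-managed JMS listener receives the XML payload as a text message and forwards it with Commons HttpClient. The class name, queue wiring, and target URL are all assumptions, not anything the poster specified:

        import java.io.IOException;

        import javax.jms.JMSException;
        import javax.jms.Message;
        import javax.jms.MessageListener;
        import javax.jms.TextMessage;

        import org.apache.commons.httpclient.HttpClient;
        import org.apache.commons.httpclient.methods.PostMethod;
        import org.apache.commons.httpclient.methods.StringRequestEntity;

        /** A plain bean, registered with a Spring message listener container (config not shown). */
        public class XmlForwardingListener implements MessageListener {
            private final HttpClient httpClient = new HttpClient();
            private String targetUrl = "http://legacy-server/endpoint"; // assumption; inject via a setter

            public void onMessage(Message message) {
                try {
                    // the XML payload arrives as the body of a JMS text message
                    String xml = ((TextMessage) message).getText();
                    PostMethod post = new PostMethod(targetUrl);
                    post.setRequestEntity(new StringRequestEntity(xml, "text/xml", "ISO-8859-1"));
                    try {
                        int status = httpClient.executeMethod(post);
                        String responseXml = post.getResponseBodyAsString();
                        // hand responseXml back here, e.g. via a reply queue
                    } finally {
                        post.releaseConnection();
                    }
                } catch (JMSException e) {
                    throw new RuntimeException(e);
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
        }

    The listener stays a plain bean; Spring's DefaultMessageListenerContainer and its JMS connection factory would be wired up in the application context, so the HTTP call simply happens on the container's consumer thread.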

    Read the article

  • Is it possible to partition more than one way at a time in SQL Server?

    - by meeting_overload
    I'm considering various ways to partition my data in SQL Server. One approach I'm looking at is to partition a particular huge table into 8 partitions, then partition each of these partitions on a different partition column. Is this even possible in SQL Server, or am I limited to defining one partition column + function + scheme per table? I'm interested in the more general answer, but this strategy is one I'm considering for a Distributed Partitioned View, where I'd partition the data under the first scheme, using the DPV to distribute the huge amount of data over 8 machines, and then on each machine partition that portion of the full table on another partition key in order to be able to drop (for example) sub-partitions as required.

    Read the article

  • Load balancer - how to write one for a custom application?

    - by Poni
    Hi! I've written a simple server application which will run distributed across several machines. My question is: how does a network load balancer work, in general? I've heard of round-robin and other algorithms, but what I haven't found an answer to is how the process really goes, in socket terms. Does the client connect to one of the load balancer machines, ask for a "free-to-connect-to" server, and simply connect to it? That's the simplest way I can think of. Or does it use the load balancer as a proxy (which implies that all the LBs must stay connected to the application servers, with all data transferred through them)? It's more of a general question. How would you do this? Thank you all!
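
    Not from the post - a minimal sketch of the first variant described above: the balancer only hands out the address of the next backend, round-robin, and the client then connects there itself. Backend addresses and the listen port are placeholders:

        import java.io.IOException;
        import java.io.PrintWriter;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.util.concurrent.atomic.AtomicInteger;

        public class RoundRobinBalancer {
            // backend addresses are placeholders for illustration
            private static final String[] BACKENDS =
                    { "10.0.0.1:9000", "10.0.0.2:9000", "10.0.0.3:9000" };
            private static final AtomicInteger NEXT = new AtomicInteger();

            public static void main(String[] args) throws IOException {
                ServerSocket server = new ServerSocket(8000);
                while (true) {
                    Socket client = server.accept();
                    try {
                        // hand the client the next backend; it connects there itself
                        int i = (NEXT.getAndIncrement() & Integer.MAX_VALUE) % BACKENDS.length;
                        PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                        out.println(BACKENDS[i]);
                    } finally {
                        client.close();
                    }
                }
            }
        }

    A proxying balancer would instead keep the connection open and pump bytes both ways between client and backend, which costs a socket pair and copying per connection but hides the backends completely.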

    Read the article

  • DLL dependent on curllib.dll - How can I fix this?

    - by haraldo
    Hi there, I'm new to developing in C++. I've developed a DLL in which I'm using curllib to make HTTP requests. When running the DLL through depend.exe, it shows that my DLL now depends on curllib.dll. This simply doesn't work for me: my DLL is set up to link the library statically, not as a shared library, and will be distributed on its own, so I cannot rely on a user having libcurl.dll installed. I thought that by including libcurl in my project, my DLL could be independent. If this is impossible to resolve, is there an alternative method I can use to make HTTP requests? Obviously I would prefer to use libcurl. Thanks in advance.

    Read the article

  • git: having 2 push/pull repos in sync (or 1 push/pull and 1 pull in sync)

    - by xavjuan
    Hello, we work at multiple geographically separate sites. Today our git clones all live at one site, A, and users at site B have to ssh over to clone or to push in changes. These are bare repos that are updated through pushes. Ideally, for git clone/push performance, I'd like to limit having to go over ssh. I'd like a copy of a given repo X to live at both site A and site B, with some syncing mechanism between them - OR to have X live at both sites but only allow pushing to A (and have that set up correctly at clone time on B). I'm worried about the case where someone at site A pushes changes to the repo at site A at the same time that someone at site B pushes a truly conflicting change to the repo at site B. Is there some syncing solution built into git for distributed open repos like this? Or a way to have a clone of X set its origin/parent to the X at the other site? thanks, -John
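
    Not from the post - a hedged sketch of the second setup described (a single writable repo at site A and a read-only mirror at site B; hostnames and paths are invented):

        # one-time, on a site B server: create a read-only mirror of site A's repo
        git clone --mirror ssh://siteA/srv/git/X.git /srv/git/X.git

        # refresh periodically, e.g. from cron every few minutes (on site B)
        cd /srv/git/X.git && git fetch --prune origin

        # developers at site B clone the local mirror for speed, but push to site A
        git clone /srv/git/X.git X
        cd X && git remote set-url --push origin ssh://siteA/srv/git/X.git

    Because pushes only ever land at site A, the simultaneous-conflicting-push case cannot arise; the price is that site B's mirror may lag by one refresh interval.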

    Read the article
