Search Results

Search found 43200 results on 1728 pages for 'large object pattern'.


  • Call HttpWebRequest on a thread other than the UI thread with the Task class - how to avoid losing objects created in the Task scope

    - by John
    I would like to call HttpWebRequest on a thread other than the UI thread, because I must make 200 requests to the server and download images. My scenario is that I make a request to the server, create an image, and return the image; this I do on another thread. I use the Task class, but it seems to call the Dispose method automatically on all objects created in the task scope, so I return a null object from this method.

        public BitmapImage CreateAvatar(Uri imageUri, int sex)
        {
            if (imageUri == null)
                return CreateDefaultAvatar(sex);

            BitmapImage image = null;
            new Task(() =>
            {
                var request = WebRequest.Create(imageUri);
                var response = request.GetResponse();
                using (var stream = response.GetResponseStream())
                {
                    Byte[] buffer = new Byte[response.ContentLength];
                    int offset = 0, actuallyRead = 0;
                    do
                    {
                        actuallyRead = stream.Read(buffer, offset, buffer.Length - offset);
                        offset += actuallyRead;
                    } while (actuallyRead > 0);

                    image = new BitmapImage
                    {
                        CreateOptions = BitmapCreateOptions.None,
                        CacheOption = BitmapCacheOption.OnLoad
                    };
                    image.BeginInit();
                    image.StreamSource = new MemoryStream(buffer);
                    image.EndInit();
                    image.Freeze();
                }
            }).Start();
            return image;
        }

    How can I avoid this? Thanks. Mr. Jon Skeet, I tried this:

        private Stream GetImageStream(Uri imageUri)
        {
            Byte[] buffer = null;
            //new Task(() =>
            //{
                var request = WebRequest.Create(imageUri);
                var response = request.GetResponse();
                using (var stream = response.GetResponseStream())
                {
                    buffer = new Byte[response.ContentLength];
                    int offset = 0, actuallyRead = 0;
                    do
                    {
                        actuallyRead = stream.Read(buffer, offset, buffer.Length - offset);
                        offset += actuallyRead;
                    } while (actuallyRead > 0);
                }
            //}).Start();
            return new MemoryStream(buffer);
        }

    It returns an object which is null, and then I tried this:

        private Stream GetImageStream(Uri imageUri)
        {
            Byte[] buffer = null;
            new Task(() =>
            {
                var request = WebRequest.Create(imageUri);
                var response = request.GetResponse();
                using (var stream = response.GetResponseStream())
                {
                    buffer = new Byte[response.ContentLength];
                    int offset = 0, actuallyRead = 0;
                    do
                    {
                        actuallyRead = stream.Read(buffer, offset, buffer.Length - offset);
                        offset += actuallyRead;
                    } while (actuallyRead > 0);
                }
            }).Start();
            return new MemoryStream(buffer);
        }

    The method above returns null.

    Read the article

  • Changing property type in class that implements interface with object type property.

    - by used2could
    I know the title is a bit confusing, but bear with me. (I'm up for suggestions on a new title.) I'm writing a TemplateEngine that will allow me to use my own markup in text-based files. I want to add controls as plugins as the application matures. Currently I've got a structure like the following:

        interface IControl
            string Id
            object Value

        class Label : IControl
            string Id
            string Value

        class Repeater : IControl
            string Id
            List<IControl> Value

    Now you'll see the strange part right away in the Repeater class with the Value property. I was hoping that having the Value type as object in the interface would allow me the flexibility to expand the controls as I go along. The compiler doesn't like this, and for good reason I guess. Does anyone have any suggestions on how to accomplish this? Note: please don't suggest things like using the Spark View Engine for templating. There is a reason I'm creating extra work for myself.
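    One common way out of this, offered only as a sketch, is to move the value type onto a type parameter instead of declaring it as object. The shape is shown below in Java generics purely for illustration; the C# translation would be an IControl<T> with a T Value property, usually alongside a non-generic base interface since C# has no wildcard types. All names beyond those in the question are made up.

        import java.util.ArrayList;
        import java.util.List;

        // Each control declares the type of its own Value.
        interface Control<T> {
            String getId();
            T getValue();
        }

        // A label's value is a plain string.
        class Label implements Control<String> {
            private final String id;
            private final String value;
            Label(String id, String value) { this.id = id; this.value = value; }
            public String getId() { return id; }
            public String getValue() { return value; }
        }

        // A repeater's value is a list of nested controls of any value type.
        class Repeater implements Control<List<Control<?>>> {
            private final String id;
            private final List<Control<?>> children = new ArrayList<>();
            Repeater(String id) { this.id = id; }
            public String getId() { return id; }
            public List<Control<?>> getValue() { return children; }
        }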

    Read the article

  • What algorithm does .NET use for searching for a pattern in a string?

    - by Hun1Ahpu
    I'm studying string searching algorithms now, and I'm wondering what algorithm is used by the .NET String.Contains function, for example. Reflector shows that this function is used, but I have no idea what its name means:

        private static extern int InternalFindNLSStringEx(IntPtr handle, string localeName, int flags, string source, int sourceCount, int startIndex, string target, int targetCount);

    Read the article

  • How to handle a large dataset with JPA (or at least with Hibernate)?

    - by Roman
    I need to make my web app work with really huge datasets. At the moment I get either an OutOfMemoryException, or output that takes 1-2 minutes to generate. Let's put it simply and suppose that we have 2 tables in the DB: Worker and WorkLog, with about 1,000 rows in the first one and 10,000,000 rows in the second one. The latter table has several fields, including 'workerId' and 'hoursWorked', among others. What we need is: count the total hours worked by each user; the list of work periods for each user. The most straightforward approach (IMO) for each task in plain SQL is:

        1) select Worker.name, sum(hoursWorked) from Worker, WorkLog
           where Worker.id = WorkLog.workerId
           group by Worker.name;
           //results of this query should be transformed to Multimap<Worker, Long>

        2) select Worker.name, WorkLog.start, WorkLog.hoursWorked from Worker, WorkLog
           where Worker.id = WorkLog.workerId;
           //results of this query should be transformed to Multimap<Worker, Period>
           //if it was JDBC then it would be vital
           //to set resultSet.setFetchSize(someSmallNumber), ~100

    So, I have two questions: how to implement each of my approaches with JPA (or at least with Hibernate), and how would you handle this problem (with JPA or Hibernate, of course)?
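    A minimal Hibernate sketch of the usual way to keep memory bounded (entity and property names are assumptions lifted from the question, and this uses the classic Session API): aggregate the totals in the database so only one row per worker comes back, and stream the detail rows through a read-only, forward-only cursor with a small fetch size instead of loading ten million entities.

        import java.util.List;
        import org.hibernate.ScrollMode;
        import org.hibernate.ScrollableResults;
        import org.hibernate.Session;

        public class WorkLogQueries {

            // 1) Total hours per worker: one aggregated row per worker, not 10M entities.
            @SuppressWarnings("unchecked")
            public List<Object[]> totalHoursPerWorker(Session session) {
                return session.createQuery(
                        "select w.name, sum(l.hoursWorked) "
                      + "from Worker w, WorkLog l where l.workerId = w.id "
                      + "group by w.name")
                    .list();
            }

            // 2) Work periods per worker: scroll the rows with a small fetch size.
            public void forEachWorkPeriod(Session session) {
                ScrollableResults rows = session.createQuery(
                        "select w.name, l.start, l.hoursWorked "
                      + "from Worker w, WorkLog l where l.workerId = w.id")
                    .setReadOnly(true)
                    .setFetchSize(100)
                    .scroll(ScrollMode.FORWARD_ONLY);
                try {
                    while (rows.next()) {
                        Object[] row = rows.get();   // [name, start, hoursWorked]
                        // fold each row into the Multimap<Worker, Period> here
                    }
                } finally {
                    rows.close();
                }
            }
        }

    With plain JPA (1.0-era) there is no scrollable cursor, so the equivalent trick is to page the second query with setFirstResult/setMaxResults and clear the persistence context between pages.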

    Read the article

  • How do I maintain coherency between model and view-model in MVVM pattern?

    - by Mike Garrett
    Problem Statement: I'm writing a very basic WPF application to alter the contents of a configuration file. The data format is an XML file with a schema. I want to use it as a learning project for MVVM, so I have duly divided the code into:

    Model: C# classes auto-generated from xsd.exe
    View-Model: view-friendly representation of the Model
    View: XAML and empty code-behind

    I understand how the View-Model can make View binding a breeze. However, doesn't that leave the View-Model <- Model semantics very awkward? Xsd.exe generates C# classes with arrays for multiple XML elements, whereas at the V-VM level you need ObservableCollections. Questions: Does this really mean I have to keep two completely different collection types representing the same data coherent? What are the best practices for maintaining coherence between the Model and the View-Model?

    Read the article

  • PHP - REGEX - use a string in the pattern but exclude it from being removed!

    - by aSeptik
    Hi all! I'm pretty new to regex; I have learned something along the way, but my knowledge is still poor, so I want to ask you for clarification on how it works. Assume I have the following strings; as you can see they can be formatted in slightly different ways from one another, but they are very similar:

        DTSTART;TZID="America/Chicago":20030819T000000
        DTEND;TZID="America/Chicago":20030819T010000
        DTSTART;TZID=US/Pacific
        DTSTART;VALUE=DATE

    Now I want to replace everything between the first A-Z block and the colon, so for example I would keep:

        DTSTART:20030819T000000
        DTEND:20030819T010000
        DTSTART
        DTSTART

    With my very limited knowledge I have worked out this shoddy regex :-(

        preg_replace( '/^[A-Z](?!;[A-Z]=[\w\W]+):$/m' , '' , $data );

    but I'm sure this regex will not work! :-) Please help me! PS: the title of the question is pretty much self-explanatory; I also want to know how, for example, to use a well-known string block to match another...

        preg_replace( '/^[DTSTART](?!;[A-Z]=[\w\W]+):$/m' , '' , $data );

    ...without deleting DTSTART. Thanks for your time! Regards, Luca Filosofi
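    For what it's worth, one way to express "drop everything from the first semicolon up to the colon (or the end of the line), keep the rest" is the pattern ;[^:\r\n]* replaced with an empty string. The sketch below shows it with Java's java.util.regex purely as an illustration; the same pattern should drop straight into preg_replace as '/;[^:\r\n]*/'.

        import java.util.regex.Pattern;

        public class IcsPropertyCleaner {
            public static void main(String[] args) {
                String data = String.join("\n",
                    "DTSTART;TZID=\"America/Chicago\":20030819T000000",
                    "DTEND;TZID=\"America/Chicago\":20030819T010000",
                    "DTSTART;TZID=US/Pacific",
                    "DTSTART;VALUE=DATE");

                // Remove from the first ';' up to (but not including) the ':',
                // or to the end of the line when there is no ':'.
                String cleaned = Pattern.compile(";[^:\\r\\n]*").matcher(data).replaceAll("");

                System.out.println(cleaned);
                // DTSTART:20030819T000000
                // DTEND:20030819T010000
                // DTSTART
                // DTSTART
            }
        }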

    Read the article

  • How to create column width in CSS that expands with large images yet stays a default size for normal content

    - by ChrisJF
    I am creating an HTML5 web page with a one-column layout. Basically, it is a forum thread with individual posts. I have specified in my CSS file that the column is 600px wide and centered it in the window using margin: 0 auto;. However, some images in the individual posts are larger than 600px and spill out of the column. I'd like to widen an individual post to fit the larger images, but I want all the other posts to still be 600px wide. Right now I'm just using overflow: auto, which creates a scroll bar, but this is less than ideal. Is it possible to have an individual post's width grow for larger content yet stay fixed for normal content? Is this possible using just pure CSS? Thanks in advance!

    Read the article

  • Spring Security and the Synchronizer Token J2EE pattern, problem when authentication fails.

    - by dfuse
    Hey, we are using Spring Security 2.0.4. We have a TransactionTokenBean which generates a unique token for each POST; the bean is session scoped. The token is used for the duplicate form submission problem (and security). The TransactionTokenBean is called from a servlet filter. Our problem is the following: after a session timeout occurs, when you do a POST in the application, Spring Security redirects to the logon page, saving the original request. After logging on again the TransactionTokenBean is created again, since it is session scoped, but then Spring forwards to the originally accessed URL, also sending the token that was generated at that time. Since the TransactionTokenBean has been created again, the tokens do not match and our filter throws an exception. I don't quite know how to handle this elegantly (or, for that matter, I can't even fix it with a hack); any ideas? This is the code of the TransactionTokenBean:

        public class TransactionTokenBean implements Serializable {

            public static final int TOKEN_LENGTH = 8;

            private RandomizerBean randomizer;
            private transient Logger logger;
            private String expectedToken;

            public String getUniqueToken() { return expectedToken; }

            public void init() { resetUniqueToken(); }

            public final void verifyAndResetUniqueToken(String actualToken) {
                verifyUniqueToken(actualToken);
                resetUniqueToken();
            }

            public void resetUniqueToken() {
                expectedToken = randomizer.getRandomString(TOKEN_LENGTH, RandomizerBean.ALPHANUMERICS);
                getLogger().debug("reset token to: " + expectedToken);
            }

            public void verifyUniqueToken(String actualToken) {
                if (getLogger().isDebugEnabled()) {
                    getLogger().debug("verifying token. expected=" + expectedToken + ", actual=" + actualToken);
                }
                if (expectedToken == null || actualToken == null || !isValidToken(actualToken)) {
                    throw new IllegalArgumentException("missing or invalid transaction token");
                }
                if (!expectedToken.equals(actualToken)) {
                    throw new InvalidTokenException();
                }
            }

            private boolean isValidToken(String actualToken) {
                return StringUtils.isAlphanumeric(actualToken);
            }

            public void setRandomizer(RandomizerBean randomizer) {
                this.randomizer = randomizer;
            }

            private Logger getLogger() {
                if (logger == null) {
                    logger = Logger.getLogger(TransactionTokenBean.class);
                }
                return logger;
            }
        }

    and this is the servlet filter (ignore the Ajax stuff):

        public class SecurityFilter implements Filter {

            static final String AJAX_TOKEN_PARAM = "ATXTOKEN";
            static final String TOKEN_PARAM = "TXTOKEN";

            private WebApplicationContext webApplicationContext;
            private Logger logger = Logger.getLogger(SecurityFilter.class);

            public void init(FilterConfig config) {
                setWebApplicationContext(WebApplicationContextUtils.getWebApplicationContext(config.getServletContext()));
            }

            public void destroy() {
            }

            public void doFilter(ServletRequest req, ServletResponse response, FilterChain chain) throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                if (isPostRequest(request)) {
                    if (isAjaxRequest(request)) {
                        log("verifying token for AJAX request " + request.getRequestURI());
                        getTransactionTokenBean(true).verifyUniqueToken(request.getParameter(AJAX_TOKEN_PARAM));
                    } else {
                        log("verifying and resetting token for non-AJAX request " + request.getRequestURI());
                        getTransactionTokenBean(false).verifyAndResetUniqueToken(request.getParameter(TOKEN_PARAM));
                    }
                }
                chain.doFilter(request, response);
            }

            private void log(String line) {
                if (logger.isDebugEnabled()) {
                    logger.debug(line);
                }
            }

            private boolean isPostRequest(HttpServletRequest request) {
                return "POST".equals(request.getMethod().toUpperCase());
            }

            private boolean isAjaxRequest(HttpServletRequest request) {
                return request.getParameter("AJAXREQUEST") != null;
            }

            private TransactionTokenBean getTransactionTokenBean(boolean ajax) {
                return (TransactionTokenBean) webApplicationContext.getBean(ajax ? "ajaxTransactionTokenBean" : "transactionTokenBean");
            }

            void setWebApplicationContext(WebApplicationContext context) {
                this.webApplicationContext = context;
            }
        }

    Read the article

  • How to activate a solution in SharePoint 2010 using client object model?

    - by Boris
    Here's the situation: I have created a customized SharePoint 2010 site. I saved that site as a site template, which created a solution. I want to be able to activate that solution using the SharePoint 2010 client object model. Is that possible? If yes, could you show me how to do it? If no, could you show me how it can be done using the standard SharePoint object model, or any other method? Thank you for all the help.

    Read the article

  • Will MyISAM type tables work better than InnoDB for large numbers of columns?

    - by Ethan
    I have a MySQL InnoDB table with 238 columns. 56 of them are of TEXT type, and 27 are VARCHAR(255). I sometimes get MySQL error 139 when users insert data. After research I found that I'm probably running into InnoDB row size/column size/column count limitations. (I'm putting it that way because the specific limits among those three things are interdependent.) The docs on InnoDB give an idea of the limits. If I switch this table to MyISAM, is it likely to solve the problem? I understand the maximum row size of 65,535 bytes; I think I'm hitting InnoDB's additional 8,000-byte limit somehow. Switching to PostgreSQL is also a remote option, but would take much longer.

    Read the article

  • Can I mix static and shared-object libraries when linking?

    - by SiegeX
    I have a C project that produces ten executables, all of which I would like to be statically linked. The problem I am facing is that one of these executables uses a 3rd-party library of which only the shared-object version is available. If I pass the -static flag to gcc, ld will error out saying it can't find the library in question (I presume it's looking for the .a version) and the executable will not be built. Ideally, I would like to be able to tell ld to statically link as much as it can and fall back to the shared-object library if a static library cannot be found. In the interim I tried something like gcc -static -lib1 -lib2 -shared -lib3rdparty foo.c -o foo.exe in hopes that ld would statically link in lib1 and lib2 but only have a run-time dependency on lib3rdparty. Unfortunately, this did not work as I intended; instead the -shared flag overrode the -static flag and everything was compiled as shared objects. Is statically linking an all-or-nothing deal, or is there some way I can mix and match?

    Read the article

  • How to troubleshoot a Highcharts script that's not rendering data when date is added and hanging the JS engine with large datasets?

    - by ylluminate
    I have a Highcharts JS graph that I'm building in Rails (although I don't think Ruby has real bearing on this problem, unless it's the date output format) to which I'm adding the timestamp of each data point. Presently the array of floats renders fine without timestamps; however, when I add the timestamp to the series it fails to render. What's worse is that when the series has hundreds of entries, all sorts of problems arise, not the least of which is the browser hanging entirely and requiring a force quit / kill. I'm using the following to build the array-of-arrays data series:

        series1 = readings.map{|row| [(row.date.to_i * 1000), (row.data1.to_f if BigDecimal(row.data1) != BigDecimal("-1000.0"))] }

    This yields a result like this:

        series: [{"name":"Data 1","data":[[1326262980000,1.79e-09],[1326262920000,1.29e-09],[1326262860000,1.22e-09],[1326262800000,1.42e-09],[1326262740000,1.29e-09],[1326262680000,1.34e-09],[1326262620000,1.31e-09],[1326262560000,1.51e-09],[1326262500000,1.24e-09],[1326262440000,1.7e-09],[1326262380000,1.24e-09],[1326262320000,1.29e-09],[1326262260000,1.53e-09],[1326262200000,1.23e-09],[1326262140000,1.21e-09]],"color":"blue"}]

    Yet nothing appears on the graph, as noted. Notwithstanding, when I compare the data series with one of their very similar examples here: http://www.highcharts.com/demo/spline-irregular-time it appears that the data series really are formatted identically (except that in mine I use the timestamp vs. date method). This leads me to think I've got a problem with the timestamp output, but I'm just not able to figure out where or how, as I'm converting the date output to an integer multiplied by 1000 to convert it to milliseconds, as explained in a similar Railscasts tutorial. I would very much appreciate it if someone could point me in the right direction here as to what I may be doing wrong. What could cause no data to appear on the graph with smaller-sized sets (<100 points), and, when into the hundreds, cause an apparent hang in the JavaScript engine? Perhaps ultimately the key lies here, as this is the entire JS that's being generated and not rendering:

        jQuery(function() {
          // 1. Define JSON options
          var options = {
            chart: {"defaultSeriesType":"spline","renderTo":"chart_name"},
            title: {"text":"Title"},
            legend: {"layout":"vertical","style":{}},
            xAxis: {"title":{"text":"UTC Time"},"type":"datetime"},
            yAxis: [{"title":{"text":"Left Title","margin":10}},{"title":{"text":"Right Groups Title"},"opposite":true}],
            tooltip: {"enabled":true},
            credits: {"enabled":false},
            plotOptions: {"areaspline":{}},
            series: [{"name":"Data 1","data":[[1326262980000,1.79e-08],[1326262920000,1.69e-08],[1326262860000,1.62e-08],[1326262800000,1.42e-08],[1326262740000,1.29e-08],[1326262680000,1.34e-08],[1326262620000,1.31e-08],[1326262560000,1.51e-08],[1326262500000,1.64e-08],[1326262440000,1.7e-08],[1326262380000,1.64e-08],[1326262320000,1.69e-08],[1326262260000,1.53e-08],[1326262200000,1.23e-08],[1326262140000,1.21e-08]],"color":"blue"},{"name":"Data 2","data":[[1326262980000,9.79e-09],[1326262920000,9.78e-09],[1326262860000,9.8e-09],[1326262800000,9.82e-09],[1326262740000,9.88e-09],[1326262680000,9.89e-09],[1326262620000,1.3e-06],[1326262560000,1.32e-06],[1326262500000,1.33e-06],[1326262440000,1.33e-06],[1326262380000,1.34e-06],[1326262320000,1.33e-06],[1326262260000,1.32e-06],[1326262200000,1.32e-06],[1326262140000,1.32e-06]],"color":"red"}],
            subtitle: {}
          };
          // 2. Add callbacks (non-JSON compliant)
          // 3. Build the chart
          var chart = new Highcharts.StockChart(options);
        });

    Read the article

  • Can an Excel VBA UDF called from the worksheet ever be passed an instance of any Excel VBA object model class?

    - by jtolle
    I'm 99% sure that the answer is "no", but I'm wondering if someone who is 100% sure can say so. Consider a VBA UDF:

        Public Function f(x)
        End Function

    When you call this from the worksheet, 'x' will be a number, string, boolean, error, array, or object of type 'Range'. Can it ever be, say, an instance of 'Chart', 'ListObject', or any other Excel-VBA object model class? (The question arose from me moving to Excel 2007 and playing with Tables, and wondering if I could write UDFs that accept them as parameters instead of Ranges. The answer to that seems to be no, but then I realized I didn't know for sure in general.)

    Read the article

  • MongoDB RSS Feed Entries, Embed the Entries in the Feed Object?

    - by Patrick Klingemann
    I am saving a reference to an RSS Feed in MongoDB; each Feed has an ever-growing list of Entries. As I'm designing my schema, I'm concerned about this statement from the MongoDB "Schema Design - Embed vs. Reference" documentation: if the amount of data to embed is huge (many megabytes), you may reach the limit on the size of a single object. This will surely happen if I understand the statement correctly. So the question is: am I correct to assume that I should not embed the Feed Entries within a Feed, because I'll eventually reach the limit on the size of a single object?
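    Assuming that reading is right, the usual alternative is to keep each entry as its own document that references its feed, so a feed can grow without ever approaching the per-document limit. A rough sketch with the MongoDB Java driver (collection names, field names, and the connection string are all invented for illustration):

        import com.mongodb.client.MongoClient;
        import com.mongodb.client.MongoClients;
        import com.mongodb.client.MongoCollection;
        import com.mongodb.client.MongoDatabase;
        import com.mongodb.client.model.Filters;
        import com.mongodb.client.model.Indexes;
        import org.bson.Document;
        import org.bson.types.ObjectId;

        public class FeedStore {
            public static void main(String[] args) {
                MongoClient client = MongoClients.create("mongodb://localhost:27017");
                MongoDatabase db = client.getDatabase("rss");
                MongoCollection<Document> feeds = db.getCollection("feeds");
                MongoCollection<Document> entries = db.getCollection("entries");

                // One small document per feed; entries live in their own collection.
                ObjectId feedId = new ObjectId();
                feeds.insertOne(new Document("_id", feedId).append("url", "http://example.com/rss"));

                // Each entry references its feed, so the feed can grow without bound.
                entries.insertOne(new Document("feedId", feedId)
                        .append("title", "First post")
                        .append("published", new java.util.Date()));

                // Index the reference so "entries for this feed" stays a cheap query.
                entries.createIndex(Indexes.ascending("feedId", "published"));

                for (Document entry : entries.find(Filters.eq("feedId", feedId))) {
                    System.out.println(entry.toJson());
                }
                client.close();
            }
        }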

    Read the article

  • How to generate one large dependency map for the whole project that builds with makefiles?

    - by Stan
    I have a gigantic project that is built using makefiles. Running make at the root of the project takes over 20 minutes when no files have changed (i.e. just traversing the project and checking for updated files). I'd like to create a dependency map that will tell me which directories I need to run 'make' in based on the file(s) changed. I already have a list of updated files that I can get from my version control system, and I'd like to skip the 20 minutes of traversing and get straight to the locations that do need to be recompiled. The project has a mix of several languages and custom tools, so this would ideally be language-independent (i.e. it would only process all makefiles to generate dependencies). I'll settle for a C/C++-specific solution, too, as the majority of the project is in C++. The project is built on Linux.

    Read the article

  • Keeping a large volume of data in Session - Suggestions / alternatives?

    - by Fishcake
    I'm developing a web app for which the client wants us to query their data as little as possible. The data will be coming from a Microsoft CRM instance. So we've agreed that data will only be queried as and when it is needed, therefore if a web user wants to see a list of contacts (for example) that list is fetched into a local DataTable. Then if a new contact is created on the website the new contact is sent to CRM and added to the local DataTable at the same time. Likewise for edits. If the user then looks at their contacts again the data will just come from the local DataTable. At the moment local data is being kept in Session but my concern is that too much memory will start being used up. However traffic is expected to be pretty small, perhaps no more than 20 concurrent users so am I worrying about nothing or is there a better way you can suggest to handle this?

    Read the article

  • How to structure an index for type ahead for extremely large dataset using Lucene or similar?

    - by Pete
    I have a dataset of 200 million+ records and am looking to build a dedicated backend to power a type-ahead solution. Lucene is of interest given its popularity and license type, but I'm open to other open source suggestions as well. I am looking for advice, tales from the trenches, or, even better, direct instruction on what I will need as far as amount of hardware and structure of software.

    Requirements:

    Must have:
    - The ability to do starts-with substring matching (I type in 'st' and it should match 'Stephen')
    - The ability to return results very quickly; I'd say 500ms is an upper bound.

    Nice to have:
    - The ability to feed relevance information into the indexing process, so that, for example, more popular terms would be returned ahead of others and not just alphabetically, aka Google style.
    - In-word substring matching, so that for example 'st' would match 'bestseller'.

    Note: this index will purely be used for type-ahead and does not need to serve standard search queries. I am not worried about getting advice on how to set up the front end or AJAX, as long as the index can be queried as a service or directly via Java code. Up votes for any useful information that allows me to get closer to an enterprise-level type-ahead solution.
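    As a starting point for the starts-with requirement, here is a minimal Lucene sketch (field names and the on-disk index path are made up, and a recent Lucene API is assumed): index the lookup string un-analyzed and pre-lowercased, then match it with a PrefixQuery. For 200 million+ records you would shard the index and likely precompute edge n-grams to cover the popularity-ranked and in-word variants, but the basic shape is:

        import java.nio.file.Paths;
        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.document.StringField;
        import org.apache.lucene.index.DirectoryReader;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.IndexWriterConfig;
        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.IndexSearcher;
        import org.apache.lucene.search.PrefixQuery;
        import org.apache.lucene.search.ScoreDoc;
        import org.apache.lucene.search.TopDocs;
        import org.apache.lucene.store.Directory;
        import org.apache.lucene.store.FSDirectory;

        public class TypeAheadSketch {
            public static void main(String[] args) throws Exception {
                Directory dir = FSDirectory.open(Paths.get("typeahead-index"));

                // Index the lookup term un-analyzed and lowercased so prefix matching is exact.
                IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
                for (String name : new String[] { "stephen", "stella", "bestseller" }) {
                    Document doc = new Document();
                    doc.add(new StringField("name", name, Field.Store.YES));
                    writer.addDocument(doc);
                }
                writer.close();

                // "st" -> everything that starts with "st".
                IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
                TopDocs hits = searcher.search(new PrefixQuery(new Term("name", "st")), 10);
                for (ScoreDoc sd : hits.scoreDocs) {
                    System.out.println(searcher.doc(sd.doc).get("name"));
                }
            }
        }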

    Read the article

  • Why is ActiveX failing to create an object from a LabVIEW executable?

    - by user360734
    Here is my scenario. I am using QuickTest Pro (VB) to create an ActiveX object from a LabVIEW VI that I built into an executable. In the build specs of the VI I have enabled the ActiveX server option (ActiveX server name: "MyLabviewProgram"), and in the VI, under Tools > Options > VI Server: Configuration, the ActiveX box is checked. So in QTP my code is:

        Set IvApp = CreateObject("MyLabviewProgram.Application")
        Set Vi = IvApp.getVIReference("MyLabviewVI.vi")
        Vi.Call ParamNames, ParamVals

    Upon running this I get a run error on the first line: ActiveX component can't create object: 'MyLabviewProgram.Application'. I am having trouble figuring out why it errors. The National Instruments website has a step on one of their community pages about "LabVIEW Executable Used as ActiveX Server": after building the EXE, "Run the EXE at least once on the target to activate the .TLB file." I've run the executable, but I'm not sure what they mean by "on the target". Does anyone have a suggestion on what I need to do to get this working?

    Read the article

  • How do I implement a listener pattern over RMI using Spring?

    - by predhme
    So here is a generalized version of our application design:

        @Controller
        public class MyController {

            @Autowired
            private MyServiceInterface myServiceInterface;

            @RequestMapping("/myURL")
            public @ResponseBody String doSomething() {
                MyListenerInterface listener = new MyListenerInterfaceImpl();
                myServiceInterface.doThenCallListener(listener);
                // do post stuff
            }
        }

        public interface MyListenerInterface {
            public void callA();
            public void callB();
        }

        public class MyListenerInterfaceImpl implements MyListenerInterface {
            // ... omitted for clarity
        }

        public interface MyServiceInterface {
            public void doThenCallListener(MyListenerInterface listener);
        }

        public class MyServiceImpl {
            public void doThenCallListener(MyListenerInterface listener) {
                // do stuff
                listener.callA();
            }
        }

    Basically, I have a controller that is being called via AJAX, from which I am looking to return a response as a string. However, I need to make a call to the backend (MyServiceInterface). That guy is exposed through RMI by using Spring (man, that was easy). But the service method, as described, requires a listener to be registered for invocation-completion purposes. So what I assume I need to achieve, transparently to the backend, is that when the listener methods are called, the call really goes back over RMI. I would have thought Spring would have a simple way to wrap a POJO (not a service singleton) with RMI calls. I looked through their documentation, but they had nothing besides exposing services via RMI. Could someone point me in the right direction?
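    For what it's worth, the way callbacks usually travel back over plain RMI is to make the listener itself a remote object: the interface extends java.rmi.Remote, and the concrete listener is exported before it is handed to the service, so what crosses the wire is a stub pointing back at the caller. A rough sketch under that assumption (it presumes the service is exported with Spring's RmiServiceExporter, i.e. real RMI rather than HTTP invoker, and the service interface below is a simplified stand-in for the one in the question):

        import java.rmi.Remote;
        import java.rmi.RemoteException;
        import java.rmi.server.UnicastRemoteObject;

        // The callback contract has to be a remote interface for the server to call back.
        interface MyListenerInterface extends Remote {
            void callA() throws RemoteException;
            void callB() throws RemoteException;
        }

        // Simplified stand-in for the real service interface from the question.
        interface MyServiceInterface extends Remote {
            void doThenCallListener(MyListenerInterface listener) throws RemoteException;
        }

        class MyListenerInterfaceImpl implements MyListenerInterface {
            public void callA() throws RemoteException { /* completion handling */ }
            public void callB() throws RemoteException { /* completion handling */ }
        }

        class ControllerSide {
            void invoke(MyServiceInterface myServiceInterface) throws RemoteException {
                MyListenerInterface listener = new MyListenerInterfaceImpl();
                // Export the listener; the server receives a stub that routes
                // callA()/callB() back over RMI to this JVM.
                MyListenerInterface stub =
                        (MyListenerInterface) UnicastRemoteObject.exportObject(listener, 0);
                myServiceInterface.doThenCallListener(stub);
                // UnicastRemoteObject.unexportObject(listener, true) once the call is done.
            }
        }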

    Read the article

  • [Python] How can I speed up unpickling large objects if I have plenty of RAM?

    - by conradlee
    It's taking me up to an hour to read a 1-gigabyte NetworkX graph data structure using cPickle (it's 1 GB when stored on disk as a binary pickle file). Note that the file loads into memory quickly. In other words, if I run:

        import cPickle as pickle
        f = open("bigNetworkXGraph.pickle", "rb")
        binary_data = f.read()                # This part doesn't take long
        graph = pickle.loads(binary_data)     # This takes ages

    How can I speed this last operation up? Note that I have tried pickling the data using both binary protocols (1 and 2), and it doesn't seem to make much difference which protocol I use. Also note that although I am using the "loads" (meaning "load string") function above, it is loading binary data, not ASCII data. I have 128 GB of RAM on the system I'm using, so I'm hoping that somebody will tell me how to increase some read buffer buried in the pickle implementation.

    Read the article
