Search Results

Search found 43200 results on 1728 pages for 'large object pattern'.

Page 315/1728 | < Previous Page | 311 312 313 314 315 316 317 318 319 320 321 322  | Next Page >

  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
     Problem: 72 child tables, each having a year index and a station index, are defined as follows:

         CREATE TABLE climate.measurement_12_013 (
           -- Inherited from table climate.measurement_12_013: id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass),
           -- Inherited from table climate.measurement_12_013: station_id integer NOT NULL,
           -- Inherited from table climate.measurement_12_013: taken date NOT NULL,
           -- Inherited from table climate.measurement_12_013: amount numeric(8,2) NOT NULL,
           -- Inherited from table climate.measurement_12_013: category_id smallint NOT NULL,
           -- Inherited from table climate.measurement_12_013: flag character varying(1) NOT NULL DEFAULT ' '::character varying,
           CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7),
           CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12)
         ) INHERITS (climate.measurement);

         CREATE INDEX measurement_12_013_s_idx ON climate.measurement_12_013 USING btree (station_id);
         CREATE INDEX measurement_12_013_y_idx ON climate.measurement_12_013 USING btree (date_part('year'::text, taken));

     (Foreign key constraints are to be added later.) The following query runs abysmally slowly due to a full table scan:

         SELECT count(1) AS measurements, avg(m.amount) AS amount
         FROM climate.measurement m
         WHERE m.station_id IN (
             SELECT s.id
             FROM climate.station s, climate.city c
             WHERE
               -- For one city ...
               c.id = 5182 AND
               -- Where stations are within an elevation range ...
               s.elevation BETWEEN 0 AND 3000 AND
               6371.009 * SQRT(
                 POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
                 (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
                  POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))) <= 50
           )
           AND
           -- Begin extracting the data from the database.
           -- The data before 1900 is shaky; insufficient after 2009.
           extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009
           AND
           -- Whittled down by category ...
           m.category_id = 1
           AND m.taken BETWEEN
             -- Start date.
             (extract( YEAR FROM m.taken )||'-01-01')::date AND
             -- End date. Calculated by checking to see if the end date wraps
             -- into the next year. If it does, then add 1 to the current year.
             (cast(extract( YEAR FROM m.taken ) + greatest( -1 *
                sign( (extract( YEAR FROM m.taken )||'-12-31')::date -
                      (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 )
              AS text)||'-12-31')::date
         GROUP BY extract( YEAR FROM m.taken )

     The sluggishness comes from this part of the query:

         m.taken BETWEEN
           /* Start date. */
           (extract( YEAR FROM m.taken )||'-01-01')::date AND
           /* End date. Calculated by checking to see if the end date wraps
              into the next year. If it does, then add 1 to the current year. */
           (cast(extract( YEAR FROM m.taken ) + greatest( -1 *
              sign( (extract( YEAR FROM m.taken )||'-12-31')::date -
                    (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 )
            AS text)||'-12-31')::date

     The HashAggregate in the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. A full table scan is being performed on the measurement table (which itself has neither data nor indexes); the table aggregates 237 million rows from its child tables. Question: What is the proper way to index the dates to avoid full table scans? Options I have considered: GIN; GiST; rewriting the WHERE clause; separate year_taken, month_taken, and day_taken columns in the tables. What are your thoughts? Thank you!
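     A minimal sketch of the "rewrite the WHERE clause" option, assuming the per-year BETWEEN bounds above (Jan 1 to Dec 31 of the same year, for every year from 1900 to 2009) collapse to one contiguous span. Once the column is no longer wrapped in functions, a plain B-tree index on taken becomes usable; whether constraint exclusion also prunes child tables depends on how the CHECK constraints line up with the predicate. The index name is illustrative:

         -- Plain B-tree index on the date column itself (one per child table):
         CREATE INDEX measurement_12_013_taken_idx
           ON climate.measurement_12_013 USING btree (taken);

         -- Compare the raw column against constant bounds so the planner can
         -- use the index instead of evaluating extract() per row:
         SELECT extract(YEAR FROM m.taken) AS year,
                count(1)      AS measurements,
                avg(m.amount) AS amount
         FROM   climate.measurement m
         WHERE  m.category_id = 1
           AND  m.taken >= date '1900-01-01'
           AND  m.taken <= date '2009-12-31'
         GROUP  BY extract(YEAR FROM m.taken);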

    Read the article

  • Is an IQueryable a query or just an object which can be queried?

    - by Albic
     I'm kinda confused about what the IQueryable interface actually represents. The MSDN documentation for IQueryable says: "Provides functionality to evaluate queries against a specific data source." The documentation for IQueryProvider says: "Defines methods to create and execute queries that are described by an IQueryable object." The name and the documentation summary suggest that it is an object/data store which can be queried. The second quote, and the fact that the ObjectQuery class from the Entity Framework implements IQueryable, suggest it is a query which can be executed. Did I misunderstand something, or is it really that fuzzy?
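     For what it's worth, a small sketch of why IQueryable behaves more like "a query" than "a data store" (the db context and Customer type are hypothetical; nothing executes until enumeration):

         // Nothing is sent to the data source here; each call just composes
         // a new IQueryable whose expression tree describes a bigger query.
         IQueryable<Customer> query = db.Customers
             .Where(c => c.City == "London")
             .OrderBy(c => c.Name);

         // Only now does the IQueryProvider translate and execute the query.
         List<Customer> results = query.ToList();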

    Read the article

  • Call HttpWebRequest on a thread other than the UI thread with the Task class - avoid disposing an object created in the Task's scope

    - by John
     I would like to call HttpWebRequest on a thread other than the UI thread, because I must make 200 requests to the server to download images. My scenario is: I make a request to the server, create an image, and return the image, all on another thread. I use the Task class, but it automatically calls the Dispose method on all objects created in the task's scope, so I return a null object from this method.

         public BitmapImage CreateAvatar(Uri imageUri, int sex)
         {
             if (imageUri == null)
                 return CreateDefaultAvatar(sex);

             BitmapImage image = null;
             new Task(() =>
             {
                 var request = WebRequest.Create(imageUri);
                 var response = request.GetResponse();
                 using (var stream = response.GetResponseStream())
                 {
                     Byte[] buffer = new Byte[response.ContentLength];
                     int offset = 0, actuallyRead = 0;
                     do
                     {
                         actuallyRead = stream.Read(buffer, offset, buffer.Length - offset);
                         offset += actuallyRead;
                     } while (actuallyRead > 0);

                     image = new BitmapImage
                     {
                         CreateOptions = BitmapCreateOptions.None,
                         CacheOption = BitmapCacheOption.OnLoad
                     };
                     image.BeginInit();
                     image.StreamSource = new MemoryStream(buffer);
                     image.EndInit();
                     image.Freeze();
                 }
             }).Start();
             return image;
         }

     How can I avoid this? Following Mr. Jon Skeet's advice I tried this (with the Task commented out):

         private Stream GetImageStream(Uri imageUri)
         {
             Byte[] buffer = null;
             //new Task(() =>
             //{
                 var request = WebRequest.Create(imageUri);
                 var response = request.GetResponse();
                 using (var stream = response.GetResponseStream())
                 {
                     buffer = new Byte[response.ContentLength];
                     int offset = 0, actuallyRead = 0;
                     do
                     {
                         actuallyRead = stream.Read(buffer, offset, buffer.Length - offset);
                         offset += actuallyRead;
                     } while (actuallyRead > 0);
                 }
             //}).Start();
             return new MemoryStream(buffer);
         }

     It returns an object which is null. Then I tried the same method with the Task lines uncommented:

         private Stream GetImageStream(Uri imageUri)
         {
             Byte[] buffer = null;
             new Task(() =>
             {
                 var request = WebRequest.Create(imageUri);
                 var response = request.GetResponse();
                 using (var stream = response.GetResponseStream())
                 {
                     buffer = new Byte[response.ContentLength];
                     int offset = 0, actuallyRead = 0;
                     do
                     {
                         actuallyRead = stream.Read(buffer, offset, buffer.Length - offset);
                         offset += actuallyRead;
                     } while (actuallyRead > 0);
                 }
             }).Start();
             return new MemoryStream(buffer);
         }

     The method above also returns null.
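     A minimal sketch of the underlying issue: the method returns before the task has run, so the buffer/image is still null at the return statement (nothing is being disposed). One way out is to block on the task's result, as below; identifiers mirror the question's code, the rest is an assumption:

         private Stream GetImageStream(Uri imageUri)
         {
             var task = Task.Factory.StartNew(() =>
             {
                 var request = WebRequest.Create(imageUri);
                 using (var response = request.GetResponse())
                 using (var stream = response.GetResponseStream())
                 {
                     var buffer = new byte[response.ContentLength];
                     int offset = 0, read;
                     do
                     {
                         read = stream.Read(buffer, offset, buffer.Length - offset);
                         offset += read;
                     } while (read > 0);
                     return (Stream)new MemoryStream(buffer);
                 }
             });
             return task.Result;  // blocks the calling thread until the task finishes
         }

     Of course, blocking with task.Result on the UI thread gives up the benefit of the background thread, so returning the Task<Stream> itself and attaching a continuation is usually the better shape for 200 downloads.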

    Read the article

  • How to create a column width in CSS that expands with large images yet stays at a default size for normal content

    - by ChrisJF
     I am creating an HTML5 web page with a one-column layout. Basically, it is a forum thread with individual posts. I have specified in my CSS file that the column be 600px wide, and centered it in the window using margin: 0 auto;. However, some images in the individual posts are larger than 600px and spill out of the column. I'd like to widen an individual post to fit the larger images, while all the other posts stay 600px wide. Right now I'm just using overflow: auto, which creates a scroll bar, but this is less than ideal. Is it possible to have an individual post's width grow for larger content yet stay fixed for normal content? Is this possible using just pure CSS? Thanks in advance!
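     One pure-CSS sketch (the class name is illustrative): display: table gives shrink-to-fit sizing, so a post grows when its content is wider than the minimum, while min-width keeps normal posts at the default:

         .post {
             display: table;    /* shrink-to-fit: expands with content wider than min-width */
             min-width: 600px;  /* normal posts still render at the default width */
             margin: 0 auto;    /* stays centered in the window */
         }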

    Read the article

  • What algorithm does .NET use for searching for a pattern in a string?

    - by Hun1Ahpu
     I'm studying string-searching algorithms now and am wondering what algorithm is used for .NET's String.Contains function, for example. Reflector shows that the following function is used, but I have no idea what its name means:

         private static extern int InternalFindNLSStringEx(IntPtr handle, string localeName, int flags,
             string source, int sourceCount, int startIndex, string target, int targetCount);
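     As an aside, the "NLS" in the name refers to Windows' National Language Support, which suggests this is the culture-aware comparison path. A small sketch contrasting the culture-sensitive and ordinal searches (the æ/ae match is culture-dependent, so treat the exact result as an assumption about the current culture):

         string source = "encyclopædia";

         // Culture-sensitive search: linguistic rules apply, so in some
         // cultures "ae" is considered equal to the ligature "æ".
         int i1 = source.IndexOf("ae", StringComparison.CurrentCulture);

         // Ordinal search: a plain char-by-char comparison, no NLS involved.
         int i2 = source.IndexOf("ae", StringComparison.Ordinal);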

    Read the article

  • Spring Security and the Synchronizer Token J2EE pattern, problem when authentication fails.

    - by dfuse
     Hey, we are using Spring Security 2.0.4. We have a TransactionTokenBean which generates a unique token on each POST; the bean is session-scoped. The token is used for the duplicate form submission problem (and for security). The TransactionTokenBean is called from a servlet filter. Our problem is the following: after a session timeout occurs, when you do a POST in the application, Spring Security redirects to the logon page, saving the original request. After logging on again, the TransactionTokenBean is created anew, since it is session-scoped, but then Spring forwards to the originally accessed URL, also sending the token that was generated at that time. Since the TransactionTokenBean was re-created, the tokens do not match and our filter throws an exception. I don't quite know how to handle this elegantly (or, for that matter, even fix it with a hack). Any ideas? This is the code of the TransactionTokenBean:

         public class TransactionTokenBean implements Serializable {
             public static final int TOKEN_LENGTH = 8;
             private RandomizerBean randomizer;
             private transient Logger logger;
             private String expectedToken;

             public String getUniqueToken() { return expectedToken; }

             public void init() { resetUniqueToken(); }

             public final void verifyAndResetUniqueToken(String actualToken) {
                 verifyUniqueToken(actualToken);
                 resetUniqueToken();
             }

             public void resetUniqueToken() {
                 expectedToken = randomizer.getRandomString(TOKEN_LENGTH, RandomizerBean.ALPHANUMERICS);
                 getLogger().debug("reset token to: " + expectedToken);
             }

             public void verifyUniqueToken(String actualToken) {
                 if (getLogger().isDebugEnabled()) {
                     getLogger().debug("verifying token. expected=" + expectedToken + ", actual=" + actualToken);
                 }
                 if (expectedToken == null || actualToken == null || !isValidToken(actualToken)) {
                     throw new IllegalArgumentException("missing or invalid transaction token");
                 }
                 if (!expectedToken.equals(actualToken)) {
                     throw new InvalidTokenException();
                 }
             }

             private boolean isValidToken(String actualToken) {
                 return StringUtils.isAlphanumeric(actualToken);
             }

             public void setRandomizer(RandomizerBean randomizer) {
                 this.randomizer = randomizer;
             }

             private Logger getLogger() {
                 if (logger == null) {
                     logger = Logger.getLogger(TransactionTokenBean.class);
                 }
                 return logger;
             }
         }

     and this is the servlet filter (ignore the Ajax stuff):

         public class SecurityFilter implements Filter {
             static final String AJAX_TOKEN_PARAM = "ATXTOKEN";
             static final String TOKEN_PARAM = "TXTOKEN";
             private WebApplicationContext webApplicationContext;
             private Logger logger = Logger.getLogger(SecurityFilter.class);

             public void init(FilterConfig config) {
                 setWebApplicationContext(WebApplicationContextUtils.getWebApplicationContext(config.getServletContext()));
             }

             public void destroy() { }

             public void doFilter(ServletRequest req, ServletResponse response, FilterChain chain)
                     throws IOException, ServletException {
                 HttpServletRequest request = (HttpServletRequest) req;
                 if (isPostRequest(request)) {
                     if (isAjaxRequest(request)) {
                         log("verifying token for AJAX request " + request.getRequestURI());
                         getTransactionTokenBean(true).verifyUniqueToken(request.getParameter(AJAX_TOKEN_PARAM));
                     } else {
                         log("verifying and resetting token for non-AJAX request " + request.getRequestURI());
                         getTransactionTokenBean(false).verifyAndResetUniqueToken(request.getParameter(TOKEN_PARAM));
                     }
                 }
                 chain.doFilter(request, response);
             }

             private void log(String line) {
                 if (logger.isDebugEnabled()) {
                     logger.debug(line);
                 }
             }

             private boolean isPostRequest(HttpServletRequest request) {
                 return "POST".equals(request.getMethod().toUpperCase());
             }

             private boolean isAjaxRequest(HttpServletRequest request) {
                 return request.getParameter("AJAXREQUEST") != null;
             }

             private TransactionTokenBean getTransactionTokenBean(boolean ajax) {
                 return (TransactionTokenBean) webApplicationContext.getBean(ajax ? "ajaxTransactionTokenBean" : "transactionTokenBean");
             }

             void setWebApplicationContext(WebApplicationContext context) {
                 this.webApplicationContext = context;
             }
         }

    Read the article

  • How to activate a solution in SharePoint 2010 using client object model?

    - by Boris
     Here's the situation: I have a customized SharePoint 2010 site. I saved that site as a site template, which created a solution. I want to be able to activate that solution using the SharePoint 2010 client object model. Is that possible? If yes, could you show me how to do it? If not, could you show me how it can be done using the standard SharePoint object model, or any other method? Thank you for all the help.

    Read the article

  • How do I maintain coherency between model and view-model in the MVVM pattern?

    - by Mike Garrett
     Problem statement: I'm writing a very basic WPF application to alter the contents of a configuration file. The data format is an XML file with a schema. I want to use it as a learning project for MVVM, so I have duly divided the code into:

       Model: C# classes auto-generated by xsd.exe
       View-Model: a view-friendly representation of the Model
       View: XAML and empty code-behind

     I understand how the View-Model can make View binding a breeze. However, doesn't that leave the View-Model <- Model semantics very awkward? xsd.exe generates C# classes with arrays for multiple XML elements, whereas at the View-Model level you need ObservableCollections. Questions: Does this really mean I have to keep two completely different collection types, representing the same data, in coherence? What are the best practices for maintaining coherence between the Model and the View-Model?
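     A common sketch of one answer (type and member names are hypothetical stand-ins for the xsd.exe output): copy the model array into an ObservableCollection once at load time, bind the view to that, and regenerate the array only when saving, so there is no continuous two-way synchronization to maintain.

         using System.Collections.ObjectModel;
         using System.Linq;

         public class ConfigModel                  // stand-in for the xsd.exe-generated class
         {
             public string[] Entry = new string[0];
         }

         public class ConfigViewModel
         {
             private readonly ConfigModel model;
             public ObservableCollection<string> Entries { get; private set; }

             public ConfigViewModel(ConfigModel model)
             {
                 this.model = model;
                 // Copy the model array into a bindable collection once, at load time.
                 Entries = new ObservableCollection<string>(model.Entry);
             }

             public void Save()
             {
                 // Flush the collection back to the array only when persisting.
                 model.Entry = Entries.ToArray();
             }
         }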

    Read the article

  • Will MyISAM type tables work better than InnoDB for large numbers of columns?

    - by Ethan
     I have a MySQL InnoDB table with 238 columns; 56 of them are of TEXT type and 27 are VARCHAR(255). I sometimes get MySQL error 139 when users insert data. After research, I found that I'm probably running into InnoDB row-size/column-size/column-count limitations. (I put it that way because the specific limits among those three things are interdependent.) The InnoDB docs give an idea of the limits. If I switch this table to MyISAM, is it likely to solve the problem? I understand the maximum row size of 65,535 bytes; I think I'm somehow hitting InnoDB's additional ~8000-byte limit. Switching to PostgreSQL is also a remote option, but would take much longer.
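     For reference, the usual diagnosis for error 139 on a table like this: InnoDB's default (Antelope) row format stores a ~768-byte prefix of every TEXT column in the row itself, and 56 TEXT columns times 768 bytes easily overflows the roughly 8000-byte (half a page) row limit. MyISAM has no such page-based constraint, and TEXT columns count only 9-12 bytes each toward the 65,535-byte limit. The switch itself is a single statement, sketched here with an illustrative table name (worth testing on a copy first):

         -- Converts the storage engine in place; the TEXT data moves out of
         -- InnoDB pages, sidestepping the ~8000-byte in-row prefix problem.
         ALTER TABLE my_wide_table ENGINE = MyISAM;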

    Read the article

  • PHP - REGEX - use a string in the pattern but exclude it from being removed!

    - by aSeptik
     Hi all! I'm pretty new to regex; I have learned a bit along the way, but my knowledge is still poor, so I want to ask you for clarification on how it works. Assume I have the following strings; as you can see, they can be formatted slightly differently from one another, but they are very similar:

         DTSTART;TZID="America/Chicago":20030819T000000
         DTEND;TZID="America/Chicago":20030819T010000
         DTSTART;TZID=US/Pacific
         DTSTART;VALUE=DATE

     Now I want to replace everything between the first A-Z block and the colon, so for example I would keep:

         DTSTART:20030819T000000
         DTEND:20030819T010000
         DTSTART
         DTSTART

     With my very newbie knowledge I have worked out this shaky regex :-(

         preg_replace('/^[A-Z](?!;[A-Z]=[\w\W]+):$/m', '', $data);

     but I'm sure this regex will not work! :-) Please help me! PS: The title of the question is pretty much explained above; I also want to know how, for example, to use a well-known string block to match another...

         preg_replace('/^[DTSTART](?!;[A-Z]=[\w\W]+):$/m', '', $data);

     ...without deleting DTSTART. Thanks for your time! Regards, Luca Filosofi
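     A sketch of one way to do this with a capture group, so the leading A-Z block is kept while the span from ";" up to the colon (or to the end of the line, when there is no colon) is removed. It assumes the parameter values never contain a colon of their own:

         <?php
         $data = 'DTSTART;TZID="America/Chicago":20030819T000000' . "\n"
               . 'DTEND;TZID="America/Chicago":20030819T010000'   . "\n"
               . 'DTSTART;TZID=US/Pacific'                        . "\n"
               . 'DTSTART;VALUE=DATE';

         // Capture the property name, then drop everything from the ';'
         // up to (but not including) the ':' -- or to end-of-line.
         $result = preg_replace('/^([A-Z]+);[^:\n]*/m', '$1', $data);

         echo $result;
         // DTSTART:20030819T000000
         // DTEND:20030819T010000
         // DTSTART
         // DTSTART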

    Read the article

  • How to troubleshoot a Highcharts script that's not rendering data when dates are added and that hangs the JS engine with large datasets?

    - by ylluminate
     I have a Highcharts JS graph that I'm building in Rails (although I don't think Ruby has real bearing on this problem, unless it's the Date output format) to which I'm adding the timestamp of each datapoint. Presently the array of floats renders fine without timestamps; however, when I add the timestamp to the series it fails to render. What's worse, when the series has hundreds of entries all sorts of problems arise, not the least of which is the browser entirely hanging and requiring a force quit / kill. I'm using the following to build the array-of-arrays data series:

         series1 = readings.map{|row| [(row.date.to_i * 1000), (row.data1.to_f if BigDecimal(row.data1) != BigDecimal("-1000.0"))] }

     This yields a result like this:

         series: [{"name":"Data 1","data":[[1326262980000,1.79e-09],[1326262920000,1.29e-09],[1326262860000,1.22e-09],[1326262800000,1.42e-09],[1326262740000,1.29e-09],[1326262680000,1.34e-09],[1326262620000,1.31e-09],[1326262560000,1.51e-09],[1326262500000,1.24e-09],[1326262440000,1.7e-09],[1326262380000,1.24e-09],[1326262320000,1.29e-09],[1326262260000,1.53e-09],[1326262200000,1.23e-09],[1326262140000,1.21e-09]],"color":"blue"}]

     Yet nothing appears on the graph, as noted. Notwithstanding, when I compare my data series with one of their very similar examples (http://www.highcharts.com/demo/spline-irregular-time), the data series appear to be formatted identically (except that mine uses the timestamp rather than the date method). This leads me to think I've got a problem with the timestamp output, but I'm just not able to figure out where or how, as I'm converting the date output to an integer multiplied by 1000 to get milliseconds, as explained in a similar Railscasts tutorial. I would very much appreciate it if someone could point me in the right direction as to what I may be doing wrong. What could cause no data to appear on the graph with smaller sets (<100 points), and an apparent hang in the JavaScript engine when into the hundreds? Perhaps ultimately the key lies here, as this is the entire JS that's being generated and not rendering:

         jQuery(function() {
           // 1. Define JSON options
           var options = {
             chart: {"defaultSeriesType":"spline","renderTo":"chart_name"},
             title: {"text":"Title"},
             legend: {"layout":"vertical","style":{}},
             xAxis: {"title":{"text":"UTC Time"},"type":"datetime"},
             yAxis: [{"title":{"text":"Left Title","margin":10}},{"title":{"text":"Right Groups Title"},"opposite":true}],
             tooltip: {"enabled":true},
             credits: {"enabled":false},
             plotOptions: {"areaspline":{}},
             series: [{"name":"Data 1","data":[[1326262980000,1.79e-08],[1326262920000,1.69e-08],[1326262860000,1.62e-08],[1326262800000,1.42e-08],[1326262740000,1.29e-08],[1326262680000,1.34e-08],[1326262620000,1.31e-08],[1326262560000,1.51e-08],[1326262500000,1.64e-08],[1326262440000,1.7e-08],[1326262380000,1.64e-08],[1326262320000,1.69e-08],[1326262260000,1.53e-08],[1326262200000,1.23e-08],[1326262140000,1.21e-08]],"color":"blue"},{"name":"Data 2","data":[[1326262980000,9.79e-09],[1326262920000,9.78e-09],[1326262860000,9.8e-09],[1326262800000,9.82e-09],[1326262740000,9.88e-09],[1326262680000,9.89e-09],[1326262620000,1.3e-06],[1326262560000,1.32e-06],[1326262500000,1.33e-06],[1326262440000,1.33e-06],[1326262380000,1.34e-06],[1326262320000,1.33e-06],[1326262260000,1.32e-06],[1326262200000,1.32e-06],[1326262140000,1.32e-06]],"color":"red"}],
             subtitle: {}
           };
           // 2. Add callbacks (non-JSON compliant)
           // 3. Build the chart
           var chart = new Highcharts.StockChart(options);
         });
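     One thing worth checking that matches both symptoms: Highcharts requires each series' data to be sorted in ascending x order, and the timestamps above are descending (1326262980000 first, decreasing). Unsorted data can silently fail to render and behaves pathologically on larger sets. A sketch of the fix applied on the JS side, just before the chart is built (sorting readings by date on the Rails side works equally well):

         // Highcharts expects data points sorted by ascending x value; the
         // generated series are newest-first, so sort each one in place.
         for (var i = 0; i < options.series.length; i++) {
           options.series[i].data.sort(function (a, b) {
             return a[0] - b[0];   // compare the millisecond timestamps
           });
         }

         var chart = new Highcharts.StockChart(options);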

    Read the article

  • How to handle large dataset with JPA (or at least with Hibernate)?

    - by Roman
     I need to make my web app work with really huge datasets. At the moment I get either an OutOfMemoryException, or output that takes 1-2 minutes to generate. Let's put it simply and suppose we have two tables in the DB: Worker and WorkLog, with about 1,000 rows in the first and 10,000,000 rows in the second. The latter table has several fields, including 'workerId' and 'hoursWorked', among others. What we need is:

       1. count the total hours worked by each user;
       2. list the work periods for each user.

     The most straightforward approach (IMO) for each task in plain SQL is:

         1) SELECT Worker.name, sum(hoursWorked) FROM Worker, WorkLog
            WHERE Worker.id = WorkLog.workerId
            GROUP BY Worker.name;

            -- results of this query should be transformed to Multimap<Worker, Long>

         2) SELECT Worker.name, WorkLog.start, WorkLog.hoursWorked FROM Worker, WorkLog
            WHERE Worker.id = WorkLog.workerId;

            -- results of this query should be transformed to Multimap<Worker, Period>
            -- if it were JDBC, it would be vital to call
            -- resultSet.setFetchSize(someSmallNumber), ~100

     So, I have two questions: how to implement each of my approaches with JPA (or at least with Hibernate), and how would you handle this problem (with JPA or Hibernate, of course)?
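     A sketch of both pieces, assuming Hibernate is the JPA provider (entity and field names mirror the question; the unwrap call is JPA 2.0): push the aggregation into the database with JPQL, and stream the big listing with Hibernate's ScrollableResults, the rough equivalent of JDBC's setFetchSize.

         // (1) Aggregation done in the database -- only ~1000 rows come back:
         List<Object[]> totals = em.createQuery(
                 "select w.name, sum(l.hoursWorked) from Worker w, WorkLog l " +
                 "where w.id = l.workerId group by w.name")
             .getResultList();

         // (2) Stream the 10M-row listing instead of materializing it:
         Session session = em.unwrap(Session.class);
         ScrollableResults rows = session.createQuery(
                 "select w.name, l.start, l.hoursWorked from Worker w, WorkLog l " +
                 "where w.id = l.workerId")
             .setFetchSize(100)
             .setReadOnly(true)
             .scroll(ScrollMode.FORWARD_ONLY);
         while (rows.next()) {
             Object[] row = rows.get();
             // build the Multimap<Worker, Period> incrementally here;
             // call session.clear() periodically to keep memory flat
         }
         rows.close();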

    Read the article

  • Can I mix static and shared-object libraries when linking?

    - by SiegeX
     I have a C project that produces ten executables, all of which I would like to be statically linked. The problem I am facing is that one of these executables uses a third-party library of which only the shared-object version is available. If I pass the -static flag to gcc, ld errors out saying it can't find the library in question (I presume it's looking for the .a version) and the executable is not built. Ideally, I would like to be able to tell ld to statically link as much as it can and fall back to the shared-object library if a static library cannot be found. In the interim I tried something like

         gcc -static -lib1 -lib2 -shared -lib3rdparty foo.c -o foo.exe

     in hopes that ld would statically link in lib1 and lib2 but only have a run-time dependence on lib3rdparty. Unfortunately, this did not work as I intended; instead the -shared flag overrode the -static flag and everything was compiled as shared objects. Is static linking an all-or-nothing deal, or is there some way I can mix and match?
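     For the record, GNU ld can switch modes per library with -Bstatic/-Bdynamic, which is exactly this kind of mix-and-match; -Wl passes the flags through gcc to the linker. Library names here are placeholders:

         # Statically link lib1 and lib2, but take the third-party library
         # as a shared object; the trailing -Bdynamic also restores the
         # default so system libraries like libc stay dynamic.
         gcc foo.c -o foo.exe \
             -Wl,-Bstatic -l1 -l2 \
             -Wl,-Bdynamic -l3rdparty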

    Read the article

  • Keeping a large volume of data in Session - Suggestions / alternatives?

    - by Fishcake
     I'm developing a web app for which the client wants us to query their data as little as possible. The data will be coming from a Microsoft CRM instance. We've agreed that data will only be queried as and when it is needed, so if a web user wants to see a list of contacts (for example), that list is fetched into a local DataTable. Then, if a new contact is created on the website, the new contact is sent to CRM and added to the local DataTable at the same time; likewise for edits. If the user then looks at their contacts again, the data just comes from the local DataTable. At the moment the local data is being kept in Session, but my concern is that too much memory will start being used up. However, traffic is expected to be pretty small, perhaps no more than 20 concurrent users. Am I worrying about nothing, or is there a better way you can suggest to handle this?
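     One alternative sketch (the key scheme and CRM fetch are hypothetical): keep the per-user DataTable in ASP.NET's Cache with a sliding expiration instead of Session, so memory is reclaimed for idle users while active users keep their local copy.

         // Cache entry per user; evicted automatically after 20 idle minutes,
         // unlike Session state, which lives for the whole session lifetime.
         string key = "contacts:" + userId;
         DataTable contacts = HttpRuntime.Cache[key] as DataTable;
         if (contacts == null)
         {
             contacts = FetchContactsFromCrm(userId);   // hypothetical CRM call
             HttpRuntime.Cache.Insert(key, contacts, null,
                 System.Web.Caching.Cache.NoAbsoluteExpiration,
                 TimeSpan.FromMinutes(20));
         }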

    Read the article

  • MongoDB RSS Feed Entries, Embed the Entries in the Feed Object?

    - by Patrick Klingemann
     I am saving a reference to an RSS feed in MongoDB; each Feed has an ever-growing list of Entries. As I'm designing my schema, I'm concerned about this statement from the MongoDB "Schema Design - Embed vs. Reference" documentation: "If the amount of data to embed is huge (many megabytes), you may reach the limit on size of a single object." This will surely happen if I understand the statement correctly. So the question is: am I correct to assume that I should not embed the Feed Entries within a Feed, because I'll eventually reach the limit on the size of a single object?
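     That reading is right: for an unbounded list, the usual schema is to reference rather than embed. A mongo-shell sketch (collection and field names are illustrative):

         // One document per feed ...
         db.feeds.insert({ _id: "feedA", url: "http://example.com/rss", title: "Example" });

         // ... and one document per entry, pointing back at its feed, so the
         // feed document never grows and the single-object size limit is moot.
         db.entries.insert({ feed_id: "feedA", title: "Post 1", published: new Date() });

         // All entries for a feed, newest first (index supports the query):
         db.entries.ensureIndex({ feed_id: 1, published: -1 });
         db.entries.find({ feed_id: "feedA" }).sort({ published: -1 });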

    Read the article

  • How to generate one large dependency map for the whole project that builds with makefiles?

    - by Stan
    I have a gigantic project that is built using makefiles. Running make at the root of the project takes over 20 minutes when no files have changed (i.e. just traversing the project and checking for updated files). I'd like to create a dependency map that will tell me which directories I need to run 'make' in based on the file(s) changed. I already have a list of updated files that I can get from my version control system, and I'd like to skip the 20 minutes of traversing and get straight to the locations that do need to be recompiled. The project has a mix of several languages and custom tools, so this would ideally be language-independent (i.e. it would only process all makefiles to generate dependencies). I'll settle for a C/C++-specific solution, too, as the majority of the project is in C++. The project is built on Linux.
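     A language-agnostic sketch of one approach: have GNU make itself dump its dependency database once, then grep that map for each changed file instead of re-running the full traversal. Paths are illustrative, and with recursive makefiles the dump would need to be run per directory:

         # -p prints make's internal database (every target plus its
         # prerequisites); -n keeps it from actually building anything.
         make -pn > depmap.txt 2>&1

         # For each file reported changed by version control, find the
         # targets -- and hence directories -- that list it as a prerequisite:
         grep ':.*src/foo/bar.cpp' depmap.txt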

    Read the article

  • Can an Excel VBA UDF called from the worksheet ever be passed an instance of any Excel VBA object model class?

    - by jtolle
     I'm 99% sure that the answer is "no", but I'm wondering if someone who is 100% sure can say so. Consider a VBA UDF:

         Public Function f(x)
         End Function

     When you call this from the worksheet, 'x' will be a number, string, boolean, error, array, or object of type 'Range'. Can it ever be, say, an instance of 'Chart', 'ListObject', or any other Excel VBA object model class? (The question arose from my moving to Excel 2007 and playing with Tables, and wondering if I could write UDFs that accept them as parameters instead of Ranges. The answer to that seems to be no, but then I realized I didn't know for sure in general.)
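     A quick way to probe this empirically (paste into a module, then call =f(A1), =f(A1:B2), =f(TRUE), etc., from a sheet): the function simply reports the runtime type of whatever Excel passed in.

         ' Returns the VBA type name of the argument the worksheet passed in.
         ' In practice this reports Double, String, Boolean, Error, Variant()
         ' for array arguments, or Range -- never Chart, ListObject, etc.
         Public Function f(x As Variant) As String
             f = TypeName(x)
         End Function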

    Read the article

  • How to structure an index for type-ahead for an extremely large dataset using Lucene or similar?

    - by Pete
     I have a dataset of 200 million+ records and am looking to build a dedicated backend to power a type-ahead solution. Lucene is of interest given its popularity and license type, but I'm open to other open-source suggestions as well. I am looking for advice, tales from the trenches, or, even better, direct instruction on what I will need as far as amount of hardware and structure of software. Requirements:

       Must have:
         - The ability to do starts-with substring matching (I type in 'st' and it should match 'Stephen')
         - The ability to return results very quickly; I'd say 500ms is an upper bound.

       Nice to have:
         - The ability to feed relevance information into the indexing process, so that, for example, more popular terms would be returned ahead of others and not just alphabetically, aka Google style.
         - In-word substring matching (so, for example, 'st' would match 'bestseller').

     Note: this index will purely be used for type-ahead and does not need to serve standard search queries. I am not worried about getting advice on how to set up the front end or AJAX, as long as the index can be queried as a service or directly via Java code. Up-votes for any useful information that allows me to get closer to an enterprise-level type-ahead solution.
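     A sketch of the classic Lucene shape for this, as noted above: index every edge n-gram (prefix) of a name as its own term, so a type-ahead lookup is a cheap exact TermQuery rather than a PrefixQuery scan over 200M terms. Lucene also ships an EdgeNGram token filter that does the expansion during analysis; it's written out by hand here for clarity, and the field names are illustrative:

         Document doc = new Document();
         String name = "Stephen";
         // One un-analyzed term per prefix: "s", "st", "ste", ...
         for (int len = 1; len <= name.length(); len++) {
             doc.add(new Field("prefix", name.substring(0, len).toLowerCase(),
                               Field.Store.NO, Field.Index.NOT_ANALYZED));
         }
         doc.add(new Field("name", name, Field.Store.YES, Field.Index.NO));
         // For the "Google style" nice-to-have, a per-document boost derived
         // from popularity skews ranking away from pure alphabetical order.
         writer.addDocument(doc);

         // Query side: "st" becomes an exact term lookup -- fast at any scale.
         TopDocs hits = searcher.search(
             new TermQuery(new Term("prefix", "st")), 10);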

    Read the article

  • How do I implement a listener pattern over RMI using Spring?

    - by predhme
     So here is a generalized version of our application design:

         @Controller
         public class MyController {
             @Autowired
             private MyServiceInterface myServiceInterface;

             @RequestMapping("/myURL")
             public @ResponseBody String doSomething() {
                 MyListenerInterface listener = new MyListenerInterfaceImpl();
                 myServiceInterface.doThenCallListener(listener);
                 // do post stuff
             }
         }

         public interface MyListenerInterface {
             public void callA();
             public void callB();
         }

         public class MyListenerInterfaceImpl implements MyListenerInterface {
             // ... omitted for clarity
         }

         public interface MyServiceInterface {
             public void doThenCallListener(MyListenerInterface listener);
         }

         public class MyServiceImpl {
             public void doThenCallListener(MyListenerInterface listener) {
                 // do stuff
                 listener.callA();
             }
         }

     Basically, I have a controller that is being called via AJAX, from which I am looking to return a response as a string. However, I need to make a call to the backend (MyServiceInterface). That guy is exposed through RMI by using Spring (man, that was easy). But the service method, as described, requires a listener to be registered for invocation-completion purposes. So what I assume I need to achieve, transparently to the backend, is that when the listener methods are called, the call actually goes over RMI. I would have thought Spring would have a simple way to wrap a POJO (not a service singleton) with RMI calls. I looked through their documentation, but they had nothing besides exposing services via RMI. Could someone point me in the right direction?
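     One plain-RMI sketch, since Spring's exporters indeed only cover singleton services: make the listener contract a java.rmi.Remote and export each per-request instance on the fly, so what crosses the wire is a stub that calls back into the web tier. Interface names follow the question; all the Remote plumbing is an assumption, and whether the stub survives Spring's transparent RMI wrapping depends on how the service is exported.

         import java.rmi.Remote;
         import java.rmi.RemoteException;
         import java.rmi.server.UnicastRemoteObject;

         // The callback contract must extend Remote, and every method must
         // declare RemoteException, for RMI to pass a stub instead of a copy.
         public interface MyListenerInterface extends Remote {
             void callA() throws RemoteException;
             void callB() throws RemoteException;
         }

         // In the controller: export the per-request listener so the server
         // receives a remote stub pointing back at this JVM.
         MyListenerInterface listener = new MyListenerInterfaceImpl();
         MyListenerInterface stub =
             (MyListenerInterface) UnicastRemoteObject.exportObject(
                 (Remote) listener, 0);          // 0 = any free port
         myServiceInterface.doThenCallListener(stub);
         UnicastRemoteObject.unexportObject((Remote) listener, true);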

    Read the article

  • Why is ActiveX failing to create an object from a LabVIEW executable?

    - by user360734
     Here is my scenario. I am using QuickTest Pro (VB) to create an ActiveX object from a LabVIEW VI that I built into an executable. In the build specs of the VI I have enabled the ActiveX server option (ActiveX server name: "MyLabviewProgram"), and in the VI, under Tools > Options > VI Server: Configuration, the ActiveX box is checked. So in QTP my code is:

         Set IvApp = CreateObject("MyLabviewProgram.Application")
         Set Vi = IvApp.getVIReference("MyLabviewVI.vi")
         Vi.Call ParamNames, ParamVals

     Upon running this I get a run error on the first line: ActiveX component can't create object: 'MyLabviewProgram.Application'. I am having trouble figuring out why it errors. The National Instruments website has a step on one of their community pages about "LabVIEW Executable Used as ActiveX Server": after building the EXE, "5. Run the EXE at least once on the target to activate the .TLB file." I've run the executable, but I'm not sure what they mean by "on the target". Does anyone have a suggestion on what I need to do to get this working?

    Read the article

  • How should I delete a child object from within a parent's slot? Possibly boost::asio specific.

    - by kaliatech
     I have written a network server class that maintains a std::set of network clients. The network clients emit a signal to the network server on disconnect (via boost::bind). When a network client disconnects, the client instance needs to be removed from the set and eventually deleted. I would think this is a common pattern, but I am having problems that might, or might not, be specific to ASIO. I've tried to trim this down to just the relevant code:

         /** NetworkServer.hpp **/
         class NetworkServices : private boost::noncopyable {
         public:
             NetworkServices(void);
             ~NetworkServices(void);
         private:
             void run();
             void onNetworkClientEvent(NetworkClientEvent&);
         private:
             std::set<boost::shared_ptr<const NetworkClient>> clients;
         };

         /** NetworkServices.cpp **/
         void NetworkServices::run() {
             running = true;
             boost::asio::io_service::work work(io_service); // keeps service running even if no operations

             // This creates just one thread for the boost::asio async network services
             boost::thread iot(boost::bind(&NetworkServices::run_io_service, this));

             while (running) {
                 boost::system::error_code err;
                 try {
                     tcp::socket* socket = new tcp::socket(io_service);
                     acceptor->accept(*socket, err);
                     if (!err) {
                         NetworkClient* networkClient =
                             new NetworkClient(io_service, boost::shared_ptr<tcp::socket>(socket));
                         networkClient->networkClientEventSignal.connect(
                             boost::bind(&NetworkServices::onNetworkClientEvent, this, _1));
                         clients.insert(boost::shared_ptr<NetworkClient>(networkClient));
                         networkClient->init(); // kicks off 1st asynch_read call
                     }
                 }
                 // etc...
             }
         }

         void NetworkServices::onNetworkClientEvent(NetworkClientEvent& evt) {
             switch (evt.getType()) {
             case NetworkClientEvent::CLIENT_ERROR: {
                 boost::shared_ptr<const NetworkClient> clientPtr = evt.getClient().getSharedPtr();

                 // ------ THIS IS THE MAGIC LINE -----
                 // If I keep this, the io_service hangs. If I comment it out,
                 // everything works fine (but I never delete the disconnected NetworkClient).
                 // If I actually deleted the client here, I might expect problems, because it is
                 // the caller of this method via boost::signal and bind. However, clientPtr is a
                 // shared ptr, and a reference is being kept in the client itself while signaling,
                 // so I would think the object is not going to be deleted from the heap here. That
                 // seems to be the case. Nevertheless, this line makes all the difference, most
                 // likely because it controls whether or not the NetworkClient ever gets deleted.
                 clients.erase(clientPtr);

                 // I should probably put this socket clean-up in the NetworkClient destructor.
                 // Regardless, by doing this I would expect the ASIO socket stuff to be
                 // adequately cleaned up afterwards.
                 tcp::socket& socket = clientPtr->getSocket();
                 try {
                     socket.shutdown(boost::asio::socket_base::shutdown_both);
                     socket.close();
                 } catch (...) {
                     CommServerContext::error("Error while shutting down and closing socket.");
                 }
                 break;
             }
             default: {
                 break;
             }
             }
         }

         /** NetworkClient.hpp **/
         class NetworkClient : public boost::enable_shared_from_this<NetworkClient>, Client {
             NetworkClient(boost::asio::io_service& io_service, boost::shared_ptr<tcp::socket> socket);
             virtual ~NetworkClient(void);

             inline boost::shared_ptr<const NetworkClient> getSharedPtr() const {
                 return shared_from_this();
             };

             boost::signal<void (NetworkClientEvent&)> networkClientEventSignal;
             void onAsyncReadHeader(const boost::system::error_code& error, size_t bytes_transferred);
         };

         /** NetworkClient.cpp - onAsyncReadHeader is called from the io_service.run() thread as the
             result of an async_read operation. The error condition is usually the result of an
             unexpected client disconnect. **/
         void NetworkClient::onAsyncReadHeader(const boost::system::error_code& error,
                                               size_t bytes_transferred) {
             if (error) {
                 // Make sure this instance doesn't get deleted by parent/slot dereferencing.
                 // Alternatively, somehow schedule it for future deletion?
                 boost::shared_ptr<const NetworkClient> clientPtr = getSharedPtr();

                 // Signal to the service that this client is disconnecting.
                 NetworkClientEvent evt(*this, NetworkClientEvent::CLIENT_ERROR);
                 networkClientEventSignal(evt);
                 networkClientEventSignal.disconnect_all_slots();
                 return;
             }
             // ...
         }

     I believe it's not safe to delete the client from within the slot handler, because the function return would be... undefined? (Interestingly, it doesn't seem to blow up on me, though.) So I've used boost::shared_ptr along with shared_from_this to make sure the client doesn't get deleted until all slots have been signaled. It doesn't seem to really matter, though. I believe this question is not specific to ASIO, but the problem manifests in a peculiar way when using ASIO. I have one thread executing io_service.run(). All ASIO read/write operations are performed asynchronously. Everything works fine with multiple clients connecting/disconnecting UNLESS I delete my client object from the set per the code above. If I delete my client object, the io_service seemingly deadlocks internally and no further asynchronous operations are performed unless I start another thread. I have try/catches around the io_service.run() call and have not been able to detect any errors. Questions: Are there best practices for deleting child objects, which are also signal emitters, from within parent slots? Any ideas as to why the io_service is hanging when I delete my network client object?
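     One sketch that speaks to both questions at once: don't erase (and thereby destroy) the client inside the slot; instead, post the erase to the io_service so it runs only after the emitting client's handler and the whole signal/slot call chain have unwound. Method names besides the existing ones are assumptions, and note that if the accept loop also touches the clients set from its own thread, that access would still need synchronizing:

         // Runs later on the io_service thread, after the signal has returned,
         // so the NetworkClient is never destroyed while its own frames are live.
         void NetworkServices::removeClient(boost::shared_ptr<const NetworkClient> clientPtr) {
             clients.erase(clientPtr);
         }

         void NetworkServices::onNetworkClientEvent(NetworkClientEvent& evt) {
             boost::shared_ptr<const NetworkClient> clientPtr = evt.getClient().getSharedPtr();
             // Defer the erase; the shared_ptr bound into the handler keeps
             // the client alive until the posted call has finished with it.
             io_service.post(boost::bind(&NetworkServices::removeClient, this, clientPtr));
         }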

    Read the article
