Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.


  • Populating and Using Dynamic Classes in C#/.NET 4.0

    - by Bob
    In our application we're considering using dynamically generated classes to hold a lot of our data. The reason for doing this is that we have customers with tables that have different structures. So you could have a customer table called "DOG" (just making this up) that contains the columns "DOGID", "DOGNAME", "DOGTYPE", etc. Customer #2 could have the same table "DOG" with the columns "DOGID", "DOG_FIRST_NAME", "DOG_LAST_NAME", "DOG_BREED", and so on. We can't create classes for these at compile time because the customer can change the table schema at any time. At the moment I have code that can generate a "DOG" class at run time using reflection. What I'm trying to figure out is how to populate this class from a DataTable (or some other .NET mechanism) without extreme performance penalties. We have one table that contains ~20 columns and ~50k rows. Doing a foreach over all of the rows and columns to create the collection takes about 1 minute, which is a little too long. Am I trying to come up with a solution that's too complex, or am I on the right track? Has anyone else experienced a problem like this? Creating dynamic classes was the solution that a developer at Microsoft proposed. If we can just populate this collection and use it efficiently, I think it could work.
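
    If the one-minute cost comes from per-cell reflection (PropertyInfo.SetValue inside the row loop), one mitigation is to compile a setter delegate per column once and then reuse those delegates for every row. The sketch below is illustrative only - the DynamicRowLoader/LoadRows names are made up, and it assumes the generated type exposes writable properties named after the DataTable columns.

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.Data;
        using System.Linq.Expressions;
        using System.Reflection;

        static class DynamicRowLoader
        {
            // Builds one compiled setter per column, so the per-row loop
            // avoids reflection entirely.
            public static IList LoadRows(DataTable table, Type rowType)
            {
                var setters = new List<KeyValuePair<int, Action<object, object>>>();
                foreach (DataColumn col in table.Columns)
                {
                    PropertyInfo prop = rowType.GetProperty(col.ColumnName);
                    if (prop == null || !prop.CanWrite) continue;

                    var target = Expression.Parameter(typeof(object));
                    var value  = Expression.Parameter(typeof(object));
                    var assign = Expression.Assign(
                        Expression.Property(Expression.Convert(target, rowType), prop),
                        Expression.Convert(value, prop.PropertyType));
                    var setter = Expression.Lambda<Action<object, object>>(assign, target, value).Compile();
                    setters.Add(new KeyValuePair<int, Action<object, object>>(col.Ordinal, setter));
                }

                var result = new List<object>(table.Rows.Count);
                foreach (DataRow row in table.Rows)
                {
                    object item = Activator.CreateInstance(rowType);
                    foreach (var s in setters)
                        if (!row.IsNull(s.Key))          // skip DBNull cells
                            s.Value(item, row[s.Key]);
                    result.Add(item);
                }
                return result;
            }
        }

    With ~20 columns that is 20 compiled delegates built up front, and the 50k-row loop then makes plain delegate calls instead of reflection calls.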

    Read the article

  • Is it a good idea to use MySQL and Neo4j together?

    - by Sanoj
    I will make an application with a lot of similar items (millions), and I would like to store them in a MySQL database, because I would like to do a lot of statistics and searches on specific values in specific columns. But at the same time I will store relations between all the items, which form many connected binary-tree-like structures (transitive closure), and relational databases are not good at that kind of structure, so I would like to store all relations in Neo4j, which has good performance for this kind of data. My plan is to have all data except the relations in the MySQL database, and all relations, keyed by item_id, stored in the Neo4j database. When I want to look up a tree, I first search Neo4j for all the item_ids in the tree, then I search the MySQL database for the specified items with a query that would look like:

        SELECT * FROM items
        WHERE item_id = 45 OR item_id = 345435 OR item_id = 343 OR item_id = 78
           OR item_id = 4522 OR item_id = 676 OR item_id = 443 OR item_id = 4255
           OR item_id = 4345

    Is this a good idea, or am I very wrong? I haven't used graph databases before. Are there any better approaches to my problem? How would the MySQL query perform in this case?
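
    One hedged note on the lookup itself: once Neo4j has produced the id list, a single parameterized IN (...) query is usually tidier than a chain of OR terms. A rough ADO.NET sketch - the ItemLookup name is made up, the items table and item_id column follow the question, and the parameter prefix may differ between MySQL providers:

        using System.Collections.Generic;
        using System.Data;

        static class ItemLookup
        {
            // Builds "SELECT * FROM items WHERE item_id IN (@p0, @p1, ...)"
            // from the ids returned by the graph query.
            public static IDbCommand Build(IDbConnection conn, IList<long> itemIds)
            {
                IDbCommand cmd = conn.CreateCommand();
                var names = new List<string>(itemIds.Count);
                for (int i = 0; i < itemIds.Count; i++)
                {
                    string name = "@p" + i;
                    names.Add(name);
                    IDbDataParameter p = cmd.CreateParameter();
                    p.ParameterName = name;
                    p.Value = itemIds[i];
                    cmd.Parameters.Add(p);
                }
                cmd.CommandText = "SELECT * FROM items WHERE item_id IN ("
                                  + string.Join(", ", names) + ")";
                return cmd;
            }
        }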

    Read the article

  • Mysql random rows

    - by n00b
    Please read the whole question - many of you don't seem to, and some only read the title. And if you don't know the solution, please don't answer; then I won't have to downvote you. I'm entertaining the idea of getting random rows directly from MySQL. What I found was

        SELECT * FROM tablename WHERE somefield = 'something' ORDER BY RAND() LIMIT 5

    but even I can see how slow that would be. Is the only way to do this something like

        SELECT * FROM tablename WHERE somefield = 'something' LIMIT RAND(aincrementvalue-5), 1

    run 5 times? Or is there some way that I, with my limited knowledge of databases, can't come up with? (No, I don't want random indexes - I hate the idea of them.) @commenters: please first look, then think, then look again, think again, and then post. I won't point fingers, but I dislike thoughtless comments. Why do I think random indexes are a nasty hack? They don't give you random results; they give you x results from a random index in a predefined order. It's like a gapless id, only in the wrong order. If you fetch one row at a time to get true randomness, you fall back to my method, but with an additional junk field. The field exists only to help with something that can be done without it at almost the same performance (and the quality - the randomness - is better), so it is a nasty hack ;) I solved it - look at my answer, and if you think it's incorrect please tell me :)
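
    For what it's worth, the "random offset" idea from the question can be sketched without ORDER BY RAND(): count the matching rows once, then fetch one row per random offset with LIMIT offset, 1. Everything here is illustrative - the table and column names follow the question, duplicate picks are not handled, and LIMIT with a large offset still scans up to that offset, so it is not free either.

        using System;
        using System.Collections.Generic;
        using System.Data;

        static class RandomRows
        {
            public static List<object[]> Pick(IDbConnection conn, int howMany)
            {
                var rng = new Random();
                var rows = new List<object[]>();

                IDbCommand count = conn.CreateCommand();
                count.CommandText = "SELECT COUNT(*) FROM tablename WHERE somefield = 'something'";
                long total = Convert.ToInt64(count.ExecuteScalar());

                for (int i = 0; i < howMany && total > 0; i++)
                {
                    long offset = (long)(rng.NextDouble() * total);
                    IDbCommand cmd = conn.CreateCommand();
                    cmd.CommandText =
                        "SELECT * FROM tablename WHERE somefield = 'something' LIMIT " + offset + ", 1";
                    using (IDataReader r = cmd.ExecuteReader())
                        if (r.Read())
                        {
                            var values = new object[r.FieldCount];
                            r.GetValues(values);   // copy the row's column values
                            rows.Add(values);
                        }
                }
                return rows;
            }
        }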

    Read the article

  • Seeking suggestions on redesigning the interface

    - by ratkok
    As part of maintaining a large piece of legacy code, we need to change part of the design, mainly to make it more testable (unit testing). One of the issues we need to resolve is the existing interface between components. The interface between two components is a class that contains static methods only. Simplified example:

        class ABInterface {
            static methodA();
            static methodB();
            ...
            static methodZ();
        };

    The interface is used by component A so that different methods can call ABInterface::methodA() in order to prepare some input data and then invoke the appropriate functions within component B. Now we are trying to redesign this interface for various reasons:

    - Extending our unit test coverage: we need to break this dependency between the components and introduce stubs/mocks.
    - The interface between these components has diverged from the original design (i.e. a lot of the newer functions used for the inter-component interface were created outside this interface class).
    - The code is old, has changed a lot over time and needs to be refactored.

    The change should not be disruptive to the rest of the system. We try to limit how many test-required artifacts are left in the production code. Performance is very important, and there should be no (or very minimal) degradation after the redesign. The code is OO in C++. I am looking for ideas on what approach to take. Any suggestions on how to do this efficiently?

    Read the article

  • Which nonclustered index would be better to create on SQL Server?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records. This table has a primary key.

        SELECT CustomerName FROM Customers

    The execution plan shows me: I/O cost = 3.45646, operator cost = 4.57715. For the first attempt to improve performance, I've created a nonclustered index for this table:

        CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC,
            [CustomerName] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this first try, I've executed the select statement and the execution plan shows me: I/O cost = 2.79942, operator cost = 3.92001. Now the second try: I've deleted this nonclustered index in order to create a new one.

        CREATE NONCLUSTERED INDEX [IX_CategoryName] ON [dbo].[Categories]
        (
            [CategoryId] ASC
        )
        INCLUDE ([CategoryName])
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
              IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this second try, I've executed the select statement and the execution plan shows me the same result: I/O cost = 2.79942, operator cost = 3.92001. Am I doing something wrong, or is this expected? Shall I use the first nonclustered index with two fields, or the second nonclustered index with one field (CategoryId) including the second field (CategoryName)?

    Read the article

  • What hash algorithms are parallelizable? Optimizing the hashing of large files on multi-core CPUs

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already and the I/O device (local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed-out. I have more cores available, and in the future will likely have even more cores. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing? (other than liquid nitrogen cooled overclocking?) or is it inherently non-parallelizable?
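
    Standard MD5/SHA-256 are indeed sequential by construction (each block's state depends on the previous block), so the usual workaround changes the definition rather than the algorithm: hash fixed-size chunks independently, then hash the concatenated chunk digests (a chunked/tree scheme, similar in spirit to Merkle trees). The result is not interchangeable with a plain SHA-256 of the file. A rough .NET-flavoured sketch, purely illustrative - a real version would cap how many chunk buffers are in flight:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Security.Cryptography;
        using System.Threading.Tasks;

        static class ChunkedHash
        {
            public static byte[] Compute(string path, int chunkSize = 4 * 1024 * 1024)
            {
                var chunkDigests = new List<Task<byte[]>>();
                using (FileStream fs = File.OpenRead(path))
                {
                    while (true)
                    {
                        byte[] buffer = new byte[chunkSize];
                        int read = fs.Read(buffer, 0, chunkSize);   // sketch: assumes full reads except at EOF
                        if (read <= 0) break;
                        int length = read;
                        // CPU-bound hashing goes to the thread pool; I/O stays on this thread
                        chunkDigests.Add(Task.Factory.StartNew(() =>
                        {
                            using (var sha = SHA256.Create())
                                return sha.ComputeHash(buffer, 0, length);
                        }));
                    }
                }

                // Combine the per-chunk digests, in order, into one final digest
                using (var final = SHA256.Create())
                {
                    foreach (Task<byte[]> t in chunkDigests)
                        final.TransformBlock(t.Result, 0, t.Result.Length, null, 0);
                    final.TransformFinalBlock(new byte[0], 0, 0);
                    return final.Hash;
                }
            }
        }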

    Read the article

  • Thread-safety of read-only memory access

    - by Edmund
    I've implemented the Barnes-Hut gravity algorithm in C as follows: Build a tree of clustered stars. For each star, traverse the tree and apply the gravitational forces from each applicable node. Update the star velocities and positions. Stage 2 is the most expensive stage, and so is implemented in parallel by dividing the set of stars. E.g. with 1000 stars and 2 threads, I have one thread processing the first 500 stars and the second thread processing the second 500. In practice this works: it speeds the computation by about 30% with two threads on a two-core machine, compared to the non-threaded version. Additionally, it yields the same numerical results as the original non-threaded version. My concern is that the two threads are accessing the same resource (namely, the tree) simultaneously. I have not added any synchronisation to the thread workers, so it's likely they will attempt to read from the same location at some point. Although access to the tree is strictly read-only I am not 100% sure it's safe. It has worked when I've tested it but I know this is no guarantee of correctness! Questions Do I need to make a private copy of the tree for each thread? Even if it is safe, are there performance problems of accessing the same memory from multiple threads?

    Read the article

  • Groovy as a substitute for Java when using BigDecimal?

    - by geejay
    I have just completed an evaluation of Java, Groovy and Scala. The factors I considered were readability and precision; the factors I would still like to know about are performance and ease of integration. I needed a BigDecimal level of precision. Here are my results:

    Java:

        void someOp() {
            BigDecimal del_theta_1 = toDec(6);
            BigDecimal del_theta_2 = toDec(2);
            BigDecimal del_theta_m = toDec(0);
            del_theta_m = abs(del_theta_1.subtract(del_theta_2))
                .divide(log(del_theta_1.divide(del_theta_2)));
        }

    Groovy:

        void someOp() {
            def del_theta_1 = 6.0
            def del_theta_2 = 2.0
            def del_theta_m = 0.0
            del_theta_m = Math.abs(del_theta_1 - del_theta_2) / Math.log(del_theta_1 / del_theta_2);
        }

    Scala:

        def other() {
            var del_theta_1 = toDec(6);
            var del_theta_2 = toDec(2);
            var del_theta_m = toDec(0);
            del_theta_m = (abs(del_theta_1 - del_theta_2) / log(del_theta_1 / del_theta_2))
        }

    Note that in Java and Scala I used static imports.

    Java - pros: it is Java; cons: no operator overloading (lots of methods), barely readable/codeable.
    Groovy - pros: BigDecimal by default means no visible typing, least surprising BigDecimal support for all operations (division included); cons: another language to learn.
    Scala - pros: has operator overloading for BigDecimal; cons: some surprising behaviour with division (fixed with Decimal128), another language to learn.

    Read the article

  • customer.name joining transactions.name vs. customer.id [serial] joining transactions.id [integer]

    - by Frank Computer
    INFORMIX-SQL 7.32, pawnshop application: a one-to-many relationship where each customer (master) can have many transactions (detail).

        customer(
            id serial,
            pk_name char(30), {PATERNAL-NAME MATERNAL-NAME, FIRST-NAME MIDDLE-NAME}
            [...]
        );
        unique index on id; unique cluster index on name;

        transaction(
            fk_name char(30),
            ticket_number serial,
            [...]
        );
        dups cluster index on fk_name; unique index on ticket_number;

    Several people have told me this is not the correct way to join master to detail. They said I should always join customer.id [serial] to transactions.id [integer]. When a customer pawns merchandise, the clerk queries the master using wildcards on name. The query usually returns several customers; the clerk scrolls until locating the right name, enters a 'D' to change to the detail transactions table, all transactions are automatically queried, then the clerk enters an 'A' to add a new transaction. The problem with joining customer.id to transaction.id is that although the customer table is maintained in sorted name order, clustering the transaction table by fk_id groups the transactions by fk_id but not in the same order as the customer names, so when the clerk is scrolling through customer names in the master, the system has to jump all over the place to locate the clustered transactions belonging to each customer. As each new customer is added, the next id is assigned to that customer, but new customers don't show up in alphabetical order. I experimented with id joins and confirmed the decrease in performance. How can I use id joins instead of name joins and still preserve the clustered transaction order by name if the transaction table has no name column?

    Read the article

  • Java Executor: Small tasks or big ones?

    - by Arash Shahkar
    Consider one big task which could be broken into hundreds of small, independently runnable tasks. To be more specific, each small task sends a light network request and decides what to do based on the answer received from the server. These small tasks are not expected to take longer than a second, and involve a few servers in total. I have in mind two approaches to implement this using the Executor framework, and I want to know which one is better and why. 1. Create a few, say 5 to 10, tasks, each doing a bunch of sends and receives. 2. Create a single task (Callable or Runnable) for each send and receive and schedule all of them (hundreds) to be run by the executor. I'm sorry if my question shows that I'm too lazy to test these and see for myself which is better (at least performance-wise). My question, while looking for an answer to this specific case, has a more general aspect: in situations like these, when you want to use an executor to do all the scheduling and other work, is it better to create lots of small tasks or to group them into a smaller number of bigger tasks?

    Read the article

  • Async task ASP.NET: HttpContext.Current.Items is empty - How do I handle this?

    - by GuruC
    We are running a very large web application in ASP.NET MVC on .NET 4.0. Recently we had an audit done, and the performance team says that there were a lot of null reference exceptions. So I started investigating from the dumps and the event viewer. My understanding is as follows: we are using async tasks in our controllers, and we rely on the HttpContext.Current.Items hashtable to store a lot of application-level values.

        Task<Articles>.Factory.StartNew(() =>
        {
            System.Web.HttpContext.Current = ControllerContext.HttpContext.ApplicationInstance.Context;
            var service = new ArticlesService(page);
            return service.GetArticles();
        }).ContinueWith(t => SetResult(t, "articles"));

    So we are copying the context object onto the new thread that is spawned by the task factory. This context.Items is used again in the thread wherever necessary. For example:

        public class SomeClass
        {
            internal static int StreamID
            {
                get
                {
                    if (HttpContext.Current != null)
                    {
                        return (int)HttpContext.Current.Items["StreamID"];
                    }
                    else
                    {
                        return DEFAULT_STREAM_ID;
                    }
                }
            }
        }

    This runs fine as long as the number of parallel requests is reasonable. My questions are: 1. When the load is higher and there are too many parallel requests, I notice that HttpContext.Current.Items is empty. I am not able to figure out a reason for this, and it causes all the null reference exceptions. 2. How do we make sure it is not empty? Is there any workaround? NOTE: I read through StackOverflow and people have questions like "HttpContext.Current is null" - but in my case it is not null, it is empty. I was also reading an article where the author says that sometimes the request object is terminated and that may cause problems, since Dispose has already been called on its objects. I am doing a copy of the context object - it's just a shallow copy, not a deep copy.
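
    One hedged workaround, not an official fix: HttpContext is tied to the request lifetime, so instead of re-assigning HttpContext.Current inside the spawned task, capture the specific values the task needs while the request is still alive and pass that snapshot in. Sketch with made-up names:

        using System;
        using System.Collections.Generic;
        using System.Web;

        public static class RequestSnapshot
        {
            // Copy the per-request values into a plain dictionary while the
            // request is still alive, then hand the snapshot to the background
            // task instead of touching HttpContext.Current from the worker thread.
            public static Dictionary<string, object> Capture(params string[] keys)
            {
                var snapshot = new Dictionary<string, object>();
                HttpContext ctx = HttpContext.Current;   // still valid on the request thread
                foreach (string key in keys)
                    if (ctx != null && ctx.Items.Contains(key))
                        snapshot[key] = ctx.Items[key];
                return snapshot;
            }
        }

    Usage would then be: capture on the request thread (var snap = RequestSnapshot.Capture("StreamID");) and read from snap inside the task, so the worker never depends on HttpContext surviving past the request.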

    Read the article

  • Start with remoting or with WCF

    - by Sheldon
    Hi. I'm just starting with distributed application development. I need to create (all by myself) an enterprise application for document management. The application will run on an intranet (within the firewall; no internet access is required now, but it probably will be later). The application needs to manage images that will be stored in MySQL Server (as blobs); those images will then be retrieved by the app and eventually one or more of them will be converted to PDF. Performance is the most important non-functional requirement. I have a couple of doubts. What do you suggest I use, .NET Remoting or WCF over TCP/IP? (I think the second one is best: the moment I need to expose the business logic over the internet, I can change the protocol.) And where do you suggest the transformation of the images to PDF files happen? I'm using iText. (I have thought of having the business logic hosted in IIS and exposed via WCF, and making that business logic responsible for getting the images and transforming them to PDF, because IIS and the MySQL Server are on the same physical machine.) I ask about where to do the transformation because the app must be accessible from multiple devices, and for mobile devices, for example, the PDF may not be necessary. Thank you very much in advance.

    Read the article

  • MS SQL - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that CONTAINS and FREETEXT search for words (and, at least in the case of CONTAINS, word prefixes). However, based upon my understanding of this MSDN book, neither of these nor their variants are capable of searching substrings. I have used LIKE rather extensively (SELECT * FROM A WHERE A.B LIKE '%substr%'). Sample table A:

        ID | Col1     | Col2     | Col3     |
        -------------------------------------
        1  | oklahoma | colorado | Utah     |
        2  | arkansas | colorado | oklahoma |
        3  | florida  | michigan | florida  |
        -------------------------------------

    The following query will give us rows 1 and 2:

        select * from A
        where Col1 like '%klah%' or Col2 like '%klah%' or Col3 like '%klah%'

    This is rather ugly, probably slow, and I just don't like it very much - probably because the implementations I'm dealing with have 10+ columns that need to be searched. The following may be a slight improvement as far as code readability goes, but performance-wise we're still in the same ballpark:

        select * from A where (Col1 + ' ' + Col2 + ' ' + Col3) like '%klah%'

    I have thought about simply adding insert, update and delete triggers that write the concatenated version of the above columns into a separate table that shadows this table. Sample Shadow_Table:

        ID | searchtext                  |
        ----------------------------------
        1  | oklahoma colorado Utah      |
        2  | arkansas colorado oklahoma  |
        3  | florida michigan florida    |
        ----------------------------------

    This would allow us to perform the following query to search for '%klah%':

        select * from Shadow_Table where searchtext like '%klah%'

    I really don't like having to remember that this shadow table exists and that I'm supposed to use it when performing multi-column substring matching, but it probably yields pretty quick reads at the expense of writes and storage space. My gut feeling tells me there is an existing solution built into SQL Server 2008. However, I don't seem to be able to find anything other than research papers on the subject. Any help would be appreciated.
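
    If the queries are issued from .NET code, one way to keep the 10+ column case readable without the shadow table is to generate the concatenated-LIKE variant from any column list. A hedged sketch - the names are made up, the table and column names must come from trusted code (not user input), and NULLable columns would need ISNULL(col, '') in the concatenation:

        using System.Collections.Generic;
        using System.Data;

        static class SubstringSearch
        {
            // Builds the "concatenate the columns, LIKE once" query from the
            // question for an arbitrary column list; the search term itself
            // is passed as a parameter.
            public static IDbCommand Build(IDbConnection conn, string table,
                                           IList<string> columns, string term)
            {
                IDbCommand cmd = conn.CreateCommand();
                string concatenated = string.Join(" + ' ' + ", columns);
                cmd.CommandText = "SELECT * FROM " + table +
                                  " WHERE (" + concatenated + ") LIKE @term";
                IDbDataParameter p = cmd.CreateParameter();
                p.ParameterName = "@term";
                p.Value = "%" + term + "%";
                cmd.Parameters.Add(p);
                return cmd;
            }
        }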

    Read the article

  • Non-standard interaction between two tables to avoid a very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts); b uniquely determines ts.

        A = pd.DataFrame(
            [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
             ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
            columns=['a', 'b', 'ts']).set_index(['a', 'b'])
        AA = A.reset_index()

    Table B is another one-column (ts) table with a non-unique index (a). The ts values are sorted "inside" each group, i.e. B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A.

        B = pd.DataFrame(
            dict(a=list('aaaaabbcccccc'),
                 ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

    The semantics is that B contains observations of occurrences of an event of the type indicated by the index. I would like to find from B the timestamp of the first occurrence of each event type after the timestamp specified in A, for each value of b. In other words, I would like to get a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So my goal would be:

        C:
        ('a', 'x')  4
        ('a', 'y')  7
        ('a', 'z')  5
        ('b', 'x')  7
        ('b', 'z')  7
        ('c', 'y')  8

    I have some working code, but it is terribly slow.

        C = AA.apply(lambda row: (
            row[0], row[1],
            B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))),
            axis=1).set_index(['a', 'b'])

    Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that right now I have 1,000 a's, assume the average number of b's per a is constant (probably 100-200), and consider that the number of observations per a is probably on the order of 300. In production I will have 1,000 times more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?

    Read the article

  • Anyone NOT using a Web Framework? Why?

    - by tom
    I'm well aware of the many reasons to use a web framework. I'm just wondering whether anyone out there is using absolutely no web framework whatsoever to develop their web projects. I would really love to know the reason(s) why you're not using a web framework. For the sake of this discussion, your programming language of choice does not matter. Some possibilities for discussion:

    - You don't hide behind an ORM.
    - You don't rely on any sort of templating system.
    - You think MVC is a really nice TLA but lacks an essential vowel or two.
    - No need for any additional javascript framework tomfoolery.
    - You just write as much code as possible in your native programming language(s).

    Summary of reasons thus far:

    - Language learning opportunities.
    - Specific performance reasons (write-intensive transaction processing).
    - Seeking more nuanced control over your data and applications (less abstraction).
    - You're building your own framework! Prove to yourself that you can succeed (or fail) just like the big framework-building gurus.
    - Integration issues with unpopular/legacy technologies (exotic databases or protocols come to mind).
    - Big company, lots of code, no talent nor buy-in present to move to a web framework.
    - Some frameworks really lock you in and cannot perpetually grow along with your needs. These few black sheep don't make it easy to jump outside of the framework, write some custom code, and easily jump back in. When you finally escape the asylum, you'll never look back.

    Read the article

  • IIS7 dynamic content compression and webservices

    - by vandalo
    I am moving an old asmx webservice to a new server with IIS7. This webservice basically sends a big dataset (10MB+) to a WinForms application. The old solution was implemented using a custom SOAP extension which compressed the content before sending the stream to the client. The client, of course, implemented the same custom SOAP extension to decompress the stream into a dataset. Everything has worked pretty well for years. My customer doesn't want to change the code by upgrading to WCF; they just want to put the old app on the new server and use the new dynamic content compression features. We're testing things on a test server (Windows Server 2008) and it seems to be working pretty well, even if it seems slow: we can't see any difference in performance (speed) between the uncompressed and compressed stream. Here's the question: where should I put the settings? Most people say I can't put them in my web.config; others say they can be put there. I am a bit confused. Are there any tricks or things I should know? What about mimeTypes - should I set some parameters somewhere, considering my stream is XML (a dataset)? Thanks to everyone who would like to help. Alberto

    Read the article

  • ASP.Net MVC DotNetOpenAuth Sample Issue on publish

    - by Roger D. Pharr
    I'm trying to use the MSDN Open ID project template for ASP.NET MVC C#. I've been able to configure a local copy to run well. But when I publish it to my hosting provider - it craps out. The error is "500 internal server error". Is there something I should know about publishing this template that I haven't noticed? Here are some more details (for diligence): Hosting provider is GoDaddy/SQL2005/IIS7. When I configure & publish the blank MVC template, it works. The local database publishes successfully, but I haven't been able to troubleshoot the connection in web.config yet. I expect there are connection string problems in the file. I tried disabling all of the references to log4net, as it seemed to be invoked several times on startup. But those changes did not seem to make a difference in either the local or published application performance. My IDE is Visual Studio 2010 Pro Any help would be greatly appreciated!

    Read the article

  • Advanced search engine or server for relational database [closed]

    - by Pawel
    In my current project we are storing a big volume of data in a relational database. One of the recent key requirements is to enrich the application by adding some advanced search capabilities. In this project, performance is one of the important factors, due to very large tables (10+ million records) with parent-child relations (for example, a multi-level parent-child relationship where I am looking for all parents with specific children). The search engine should also be able to check these references for hits. I have found some potential engines on Stack Overflow; however, it looks like all of them are geared more toward text search than relational databases, and are hosted on Linux: Lucene, Solr, Sphinx. As I understand it, some of them use documents as the source for searching, but is it possible or efficient to create documents programmatically based on my relational data? As I am not familiar with all of their features/capabilities, can anyone please make some recommendations or propose a different solution? To summarize my requirements: a framework/engine to search a relational database, including descendants; support for Microsoft SQL Server; usable from .NET applications; preferably hosted on Windows systems. Does any of the engines mentioned above solve my problem? Do you know of any better solution?

    Read the article

  • AJAX or a server side framework?

    - by Romansky
    I am working with a friend on building a web site; in general this web site will be a custom web app along with a very custom social-network type of thing. Currently I have a mock-up site that uses simple PHP with AJAX, JSON and jQuery, and I love how it works, I love the way it all fits together. But for a mock-up I did not implement any of the social network design patterns such as login, rating, groups etc. This brought me to a higher-level decision: I need to decide whether I want to develop all this functionality by hand or use some kind of framework. I spent this entire day researching, and it would seem that using Drupal and similar frameworks will make the social network part easy (overlooking the customization requirement for now), but will make client-side web app development less so. I found some other frameworks that are more developer-friendly (customizable), such as Zend and Symfony, but these seem to take a lot of the power from the client and implement it on the server side; to me this seems a waste (and an unjustified performance bottleneck). Finally I found the Aptana Jaxer framework, which seems to think the same way I feel. That said, it seems a bit under-developed; I didn't find modules for a social network, and the community around it seems thin (searching Jaxer on StackOverflow returns few results). So other than making server-side DB communication a bit simpler, it does not help me greatly. My requirements are a good facility to develop web apps on, while providing all the user-centric logic usually used for social networks in advance. What would you recommend? EDIT: OK, let's fine-tune this question. After considering it a bit further: is there a good downloadable source of a social network site in PHP that I can work around in building my web app? (I really like using jQuery, AJAX, JSON etc.)

    Read the article

  • Help regarding database and logic layer for my ASP.NET MVC application

    - by Ismail S
    I'm going to start a new project which is going to be small initially but may grow big over the years. I'm strongly convinced that I'm going to use ASP.NET MVC with jQuery for the UI. I want to go with MySQL as the database for several reasons, but I'm worried about a few things. I have a good few years of experience working with SQL Server databases, and on one project I had a bad experience creating and managing stored procedures on a MySQL database. I'm totally new to LINQ, but I see that it is easier to use once you are familiar with it. First, accessing data should be easy. So I thought I should use LINQ to SQL, but somewhere I read that it is not directly supported for MySQL; the MySQL .NET connector adds support through the Entity Framework instead. I don't know what the pros and cons of that are. I would love to implement the repository pattern - will that be possible if I use the Entity Framework? I'm not clear on how I should go about all this, or whether I should just forget everything and directly use LINQ to SQL on SQL Server. I'm also concerned about performance. Someone told me that if we use the Entity Framework it fetches a lot of data and then filters it. Is that right?
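
    On the last two points, a hedged note: a repository can sit in front of an Entity Framework context just fine, and the "fetches a lot of data and then filters it" behaviour only happens if you filter after materializing results (e.g. calling ToList() first); a filter applied to the IQueryable is translated to SQL and runs in the database. A bare-bones sketch with purely illustrative names:

        using System;
        using System.Collections.Generic;
        using System.Data.Entity;        // EntityFramework package
        using System.Linq;
        using System.Linq.Expressions;

        public interface IRepository<T> where T : class
        {
            IEnumerable<T> Find(Expression<Func<T, bool>> predicate);
            void Add(T entity);
        }

        public class EfRepository<T> : IRepository<T> where T : class
        {
            private readonly DbContext _context;
            public EfRepository(DbContext context) { _context = context; }

            public IEnumerable<T> Find(Expression<Func<T, bool>> predicate)
            {
                // The predicate is translated to SQL: rows are filtered in the
                // database, not pulled wholesale and filtered in memory.
                return _context.Set<T>().Where(predicate).ToList();
            }

            public void Add(T entity)
            {
                _context.Set<T>().Add(entity);
                _context.SaveChanges();
            }
        }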

    Read the article

  • How to access a web service behind a NAT?

    - by jr
    We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat. It is installed on a server in the small business and is accessed via an iPhone or other portable device, so the devices connecting to the server could come from any number of IP addresses. The problem comes with the installation. When we install this service, setting up port forwarding so the outside world can gain access to Tomcat always seems to become a problem; most of the time the owner doesn't know the router password, and so on. I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic. 1. Set up an SSH tunnel from each client office to a central server: the remote devices would connect to that central server on a port and the traffic would be tunneled back to Tomcat in the office. It seems redundant to have SSH and then SSL, but there is really no other way to accomplish it, since end-to-end I need SSL (from device to office). I'm not sure of the performance implications here, but I know it would work. We would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc. 2. Set up UPnP to try to configure the hole for me. This would likely work most of the time, but UPnP isn't guaranteed to be turned on. It may be a good next step. 3. Come up with some type of NAT traversal scheme. I'm just not familiar with these and uncertain of how exactly they work. We have access to a centralized server, which is required for authentication, if that makes it any easier. What else should I be looking at to get this accomplished?

    Read the article

  • Using array of Action() in a lambda expression

    - by Sean87
    I want to do some performance measurement for a method that does some work with int arrays, so I wrote the following class:

        public class TimeKeeper
        {
            public TimeSpan Measure(Action[] actions)
            {
                var watch = new Stopwatch();
                watch.Start();
                foreach (var action in actions)
                {
                    action();
                }
                return watch.Elapsed;
            }
        }

    But I cannot call the Measure method as in the example below:

        var elpased = new TimeKeeper();
        elpased.Measure(
            () => new Action[]
            {
                FillArray(ref a, "a", 10000),
                FillArray(ref a, "a", 10000),
                FillArray(ref a, "a", 10000)
            });

    I get the following errors:

        Cannot convert lambda expression to type 'System.Action[]' because it is not a delegate type
        Cannot implicitly convert type 'void' to 'System.Action'
        Cannot implicitly convert type 'void' to 'System.Action'
        Cannot implicitly convert type 'void' to 'System.Action'

    Here is the method that works with the arrays:

        private void FillArray(ref int[] array, string name, int count)
        {
            array = new int[count];
            for (int i = 0; i < array.Length; i++)
            {
                array[i] = i;
            }
            Console.WriteLine("Array {0} is now filled up with {1} values", name, count);
        }

    What am I doing wrong?
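
    A hedged reading of those errors: Measure expects an Action[], but the call wraps the array in a lambda, and each FillArray(...) inside the initializer is invoked immediately, so its void result is what ends up as the array element. The usual fix is to pass the array directly and make each element a lambda that calls FillArray, deferring execution until Measure invokes it. Sketch, assuming it lives in the same class as FillArray:

        int[] a = null;

        var keeper = new TimeKeeper();
        TimeSpan elapsed = keeper.Measure(new Action[]
        {
            () => FillArray(ref a, "a", 10000),   // deferred: runs when Measure calls action()
            () => FillArray(ref a, "a", 10000),
            () => FillArray(ref a, "a", 10000)
        });
        Console.WriteLine(elapsed);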

    Read the article

  • java.awt.Robot.keyPress for continuous keystrokes

    - by Deb
    So, here's my problem. I have a Java program which sends keystroke messages to a game (built in Unity), based on how the user interacts with an Android phone. (My Java program is a listener for the Android interaction over wi-fi.) In order to do this, I am using java.awt.Robot to send key presses to the game window. I have the following code block in my listener program:

        if (interacting) {
            Robot robot = new Robot();
            robot.keyPress(VK_A);
            robot.delay(20); // to simulate the normal keyboard rate
        }

    The variable interacting will be true as long as the user presses down on the touch screen of the phone, and what I intend to achieve is a continuous chain of keystroke messages being delivered to the game (through the listener). However, this is severely affecting performance for some reason. I notice that the game becomes slow (rapidly dropping frame rates), and even the computer becomes slow in general. What's going wrong? Should I use a robot.keyRelease(VK_A) after each keyPress? But my game has a different action mapped to the release of a key, and I do not want rapid key presses and releases; what I really want is to simulate continuous keystrokes, exactly the way it would behave if the user were pressing down the A key on their keyboard manually. Please help.

    Read the article

  • How to perform a SQL query to DataTable operation that can be cancelled

    - by David W
    I tried to make the title as specific as possible. Basically, what I have running inside a BackgroundWorker thread now is some code that looks like:

        SqlConnection conn = new SqlConnection(connstring);
        SqlCommand cmd = new SqlCommand(query, conn);
        conn.Open();
        SqlDataAdapter sda = new SqlDataAdapter(cmd);
        sda.Fill(Results);
        conn.Close();
        sda.Dispose();

    where query is a string representing a large, time-consuming query and conn is the connection object. My problem now is that I need a stop button. I've come to realize that killing the BackgroundWorker would be worthless, because I still want to keep whatever results are left over after the query is cancelled. Plus it wouldn't be able to check the cancelled state until after the query. What I've come up with so far: I've been trying to work out how to handle this efficiently without taking too big a performance hit. My idea was to use a SqlDataReader to read the data from the query piece by piece, so that I have a "loop" in which to check a flag I can set from the GUI via a button. The problem is that, as far as I know, I can't use the Load() method of a DataTable and still be able to cancel the SqlCommand. If I'm wrong, please let me know, because that would make cancelling slightly easier. In light of this, I realized I may only be able to cancel the SqlCommand mid-query if I did something like the below (pseudo-code):

        while (reader.Read())
        {
            // check flag status
            // if it is set to 'kill', fire off the kill thread
            // otherwise populate the DataTable with what was read
        }

    However, it seems to me this would be highly inefficient and possibly costly. Is this the only way to kill a SqlCommand in progress that absolutely needs to go into a DataTable? Any help would be appreciated!
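
    For what it's worth, the reader loop in the pseudo-code is roughly what a cancellable fill looks like, and checking a flag per row is cheap next to the network read. A hedged sketch (class and member names are made up): the flag is set from the UI thread, partial rows are kept, and SqlCommand.Cancel() is called before the reader is closed so that closing it doesn't drain the remaining result set.

        using System;
        using System.Data;
        using System.Data.SqlClient;

        public class CancellableQuery
        {
            private volatile bool _cancelRequested;          // set from the UI "Stop" button
            public void Cancel() { _cancelRequested = true; }

            public DataTable Run(string connString, string query)
            {
                var results = new DataTable();
                using (var conn = new SqlConnection(connString))
                using (var cmd = new SqlCommand(query, conn))
                {
                    conn.Open();
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        // build columns from the reader's schema
                        for (int i = 0; i < reader.FieldCount; i++)
                            results.Columns.Add(reader.GetName(i), reader.GetFieldType(i));

                        var row = new object[reader.FieldCount];
                        while (!_cancelRequested && reader.Read())
                        {
                            reader.GetValues(row);
                            results.Rows.Add(row);
                        }

                        if (_cancelRequested)
                            cmd.Cancel();   // ask SQL Server to stop producing rows
                    }
                }
                return results;             // partial results are kept on cancel
            }
        }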

    Read the article

  • Data structure to build and lookup set of integer ranges

    - by actual
    I have a set of uint32 integers; there may be millions of items in the set. 50-70% of them are consecutive, but in the input stream they appear in unpredictable order. I need to: 1. Compress this set into ranges to achieve a space-efficient representation. I have already implemented this using a trivial algorithm; since the ranges are computed only once, speed is not important here. After this transformation the number of resulting ranges is typically within 5,000-10,000, and many of them are single-item, of course. 2. Test membership of some integer; information about the specific range in the set is not required. This one must be very fast - O(1). I was thinking about minimal perfect hash functions, but they do not play well with ranges. Bitsets are very space-inefficient. Other structures, like binary trees, have O(log n) complexity; the worst thing about them is that the implementation makes many conditional jumps that the processor cannot predict well, giving poor performance. Is there any data structure or algorithm specialized in integer ranges that solves this task?
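
    A hedged observation rather than a definitive answer: with only 5,000-10,000 non-overlapping ranges, a flat sorted array of range starts plus one binary search is about 13-14 comparisons per lookup over a few tens of kilobytes of cache-friendly data - not O(1), but usually close in practice. A minimal sketch (ranges are assumed non-overlapping and built once):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public sealed class RangeSet
        {
            private readonly uint[] _starts;   // sorted range starts, inclusive
            private readonly uint[] _ends;     // range ends, inclusive, same order

            public RangeSet(IEnumerable<Tuple<uint, uint>> ranges)
            {
                var sorted = ranges.OrderBy(r => r.Item1).ToArray();
                _starts = sorted.Select(r => r.Item1).ToArray();
                _ends   = sorted.Select(r => r.Item2).ToArray();
            }

            public bool Contains(uint value)
            {
                int i = Array.BinarySearch(_starts, value);
                if (i >= 0) return true;        // value is exactly a range start
                i = ~i - 1;                     // index of the last start <= value
                return i >= 0 && value <= _ends[i];
            }
        }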

    Read the article
