Search Results

Search found 19055 results on 763 pages for 'high performance'.


  • What should a Java/SOA developer be able to do?

    - by Regular Joe
    Hello community. I have been assigned the task of listing the activities a Java developer should be able to perform and estimating how long each would take. I've come up with the following (S = small complexity, M = medium complexity, H = high complexity, 1d = 1 day):

    - Create a JDBC CRUD backend (S=1d, M=5d, H=10d)
    - Create a JSP/Servlet frontend for a CRUD app (S=1d, M=10d, H=20d)
    - Create a Swing desktop frontend (S=1d, M=15d, H=30d)
    - Create an ORM-based CRUD ...
    - Create a web app frontend with a web framework ...

    This is intended for a Java "enterprise" developer. The other profile I have is SOA Developer, but I could not get beyond: Create a web service (S=.5d, M=2d, H=7d).

    Q.- What other activities should a Java developer be able to do?
    Q.- What activities should a SOA developer be able to do?

    Please help me with this. I know this is at the limit of the kind of questions that can be asked here, but I really need a little push on this, and I don't want to go to Yahoo Answers for it.

    Read the article

  • DataReader - hardcode ordinals?

    - by David Neale
    When returning data from a DataReader I would typically use the ordinal reference on the DataReader to grab the relevant column: if (dr.HasRows) Console.WriteLine(dr[0].ToString()); (OR dr.GetString(0); OR (string)dr[0];)... I have always done this because I was advised at an early stage that using dr["ColumnName"] or a more elegant way of indexing causes a performance hit. However, whilst everything is becoming increasingly strongly-typed I feel more uncomfortable with this. I'm also aware that the above does not check for DBNull. How should data be returned from a DataReader?
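
    One compromise I've seen (just a sketch, and the column name here is hypothetical) is to resolve the ordinal by name once per query with GetOrdinal, then do the per-row reads by ordinal and guard against DBNull:

        // Sketch: one name lookup per query, ordinal-based reads per row, DBNull checked.
        // "UserName" is a hypothetical column name.
        int userNameOrdinal = dr.GetOrdinal("UserName");

        while (dr.Read())
        {
            string userName = dr.IsDBNull(userNameOrdinal)
                ? null
                : dr.GetString(userNameOrdinal);
            Console.WriteLine(userName);
        }

    That keeps the name-based lookup cost to a single call while staying explicit about nulls; whether the strong-typing concern is better served by a mapper is part of what I'm asking.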

    Read the article

  • Image Application in WPF and Performance

    - by Harsha
    Hello All, I am planning to build an image processing application using WPF. Brightness/contrast and histogram are the main operations of this application. I have downloaded the application "Foundations: Bitmaps and Pixel Bits" from http://msdn.microsoft.com/en-us/magazine/cc534995.aspx. But when I tried to open images larger than 1200x1600, it was very slow. How can I increase the performance? Has anyone worked on image processing in WPF? Please suggest how to solve this performance issue in WPF for operations on images larger than 1600x1200. Thank you, Harsha

    Read the article

  • Can I draw Qt objects directly to Win32 DC (Device Context)?

    - by Kevin
    I can draw Qt objects to a QImage and then draw the image to an HDC or CDC. This may hurt our application's performance. It would be great if I could draw Qt objects directly to a Win32 HDC or MFC CDC. I imagine there being a class, say QWin32Image for the sake of argument, that I could use this way:

    QWin32Image image(hdc, 100, 100, Format_ARGB32_Premultiplied);
    QPainter painter(&image);
    painter.drawText(....);

    Is something like this possible? Or is there a better way to do it?

    Read the article

  • Query optimization

    - by Lijo
    Hi Team, I am trying to learn SQL query optimization using SQL Server 2005. However, I have not found any practical example where performance is improved by tweaking the query alone. Can you please list some example queries - before and after optimization? It has to be query tuning only: no adding indexes, no creating covering indexes, and no alterations to the tables. No change is to be made to tables or indexes. [I understand that indexes are important; this is only for learning purposes.] It would be great if you could explain using a temp table instead of referring to sample databases like AdventureWorks. Expecting your support... Thanks Lijo

    Read the article

  • In LINQ to SQL, wrapping the DataContext in a using statement - pros and cons

    - by hIpPy
    Can someone pitch in their opinion on the pros and cons of wrapping the DataContext in a using statement (or not) in LINQ to SQL, in terms of factors such as performance, memory usage, ease of coding, and it being the right thing to do? Update: In one particular application, I experienced that without wrapping the DataContext in a using block, memory usage kept increasing because the live objects were not released for GC. As in the example below: if I hold the reference to the list of q objects and access entities of q, I create an object graph that is not released for GC.

    DataContext with using:

    using (DBDataContext db = new DBDataContext())
    {
        var q = from x in db.Tables
                where x.Id == someId
                select x;
        return q.ToList();
    }

    DataContext without using, kept alive:

    DBDataContext db = new DBDataContext();
    var q = from x in db.Tables
            where x.Id == someId
            select x;
    return q.ToList();

    Thanks.
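
    A sketch of the variant I'm leaning towards (the DTO, and the Name column it copies, are hypothetical): keep the using block, but project into plain objects inside it, so no change-tracked entities or deferred-load graph outlive the DataContext.

        // Sketch (hypothetical DTO): materialize plain objects before the context is disposed,
        // so nothing holds the DataContext's object graph alive for the GC.
        public List<TableDto> GetRows(int someId)
        {
            using (DBDataContext db = new DBDataContext())
            {
                return (from x in db.Tables
                        where x.Id == someId
                        select new TableDto { Id = x.Id, Name = x.Name }).ToList();
            }
        }

        public class TableDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

    Opinions on whether that projection step is worth it versus simply returning the entities are exactly the kind of pros and cons I'm after.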

    Read the article

  • Using Excel VBA, given the daily prices of 50 stocks, choose 10 stocks such that they have the minimum correlation with one another

    - by correl
    The high-level goal is to choose 10 stocks that have the lowest correlation with one another, out of a pool of 50, so that I can have a well-diversified portfolio. I have managed to write a VBA macro to download the past 3 years of daily price data from Yahoo Finance and then compute the 50x50 correlation matrix (using the Correl function), with the daily close as the data. What I have tried so far is just a local-maximum heuristic:

    - For the two stocks that have the highest correlation with each other, remove one of them. Between the two, remove the one that has the higher average correlation with all the other stocks.
    - When I remove a stock from the pool, I just delete the corresponding row and column, giving a smaller matrix.
    - Repeat until just 10 stocks remain (a 10x10 matrix).

    I was wondering if there is an algorithm that already solves such a problem and gives the optimal solution?

    Read the article

  • PHP/MySQL database connection priority?

    - by Josh
    Hello, I have a production database that holds usage statistics for a service we're working on. I use PHP to periodically roll up interesting statistics at different resolutions (day, week, month, year) into buckets dictated by the resolution. The PHP application I've written "completes" its data when it's run, such that it will calculate all the rolled-up statistics for the resolutions and periods since it was last run. This is useful if we want to turn it off to debug database performance issues, because I can turn it back on and have it complete its data set. The problem I have is that the calculations are fairly intensive and drive up the QPS of the production database server. Is there a way to set a "priority" on a particular database connection so that it will only use "off-cycles" to do these calculations? Maybe the proper response would be to replicate the tables I'm working on into a different stats database, but unfortunately I don't have the resources in place to attempt such a thing (yet). Thanks in advance for any help, Josh

    Read the article

  • WCF VS. Sockets

    - by kite
    Hello, I would like to know which of WCF or .NET sockets is more efficient and more recommended in a game development scenario. Here are the different parts of the game:

    - client/server communication to play over the internet
    - peer-to-peer on a local network

    I would like to know which technology you would use for these parts (WCF for both, sockets for both, WCF for one and sockets for the other...) and why, if possible. The game involved doesn't require a high communication frequency (3-4 messages per second is enough). Thanks, KiTe

    Read the article

  • Can't connect to samba

    - by Rick
    Windows 7, connecting to Samba shares - I have a follow-up question from the link above. I am running Samba 3.0.23d on FreeBSD release 7.1. I changed the policies as described there, but I still cannot connect to the Samba server from the Windows 7 or Server 2008 machines. I feel it is a problem with recognizing the new machines on the network: the Windows machines can see the Samba server, but cannot connect to it or view any of its files. After changing the security policies, the Samba server asked for a network ID and password but would not allow the machine to connect, saying unknown username or bad password. Here is my current config file. There is no sign of encryption anywhere - should I just add the line? I'm not sure what that would do elsewhere.

    Workgroup = WWOFFSET
    server string = WWO File Server (%v)
    security = server
    username map = /usr/local/etc/smb.users
    hosts allow = 10. 127.

    # If you want to automatically load your printer list rather
    # than setting them up individually then you'll need this
    ;   load printers = yes

    # you may wish to override the location of the printcap file
    ;   printcap name = /etc/printcap

    # on SystemV system setting printcap name to lpstat should allow
    # you to automatically obtain a printer list from the SystemV spool
    # system
    ;   printcap name = lpstat

    # It should not be necessary to specify the print system type unless
    # it is non-standard. Currently supported print systems include:
    # bsd, cups, sysv, plp, lprng, aix, hpux, qnx
    ;   printing = cups

    # Uncomment this if you want a guest account, you must add this to /etc/passwd
    # otherwise the user "nobody" is used
    ;   guest account = pcguest

    # this tells Samba to use a separate log file for each machine
    # that connects
    log file = /var/log/samba/log.%m

    # Put a capping on the size of the log files (in Kb).
    max log size = 50

    # Use password server option only with security = server
    # The argument list may include:
    #   password server = My_PDC_Name [My_BDC_Name] [My_Next_BDC_Name]
    # or to auto-locate the domain controller/s
    #   password server = *
    ;   password server = <NT-Server-Name>
    password server = SERVER0

    # Use the realm option only with security = ads
    # Specifies the Active Directory realm the host is part of
    ;   realm = MY_REALM

    # Backend to store user information in. New installations should
    # use either tdbsam or ldapsam. smbpasswd is available for backwards
    # compatibility. tdbsam requires no further configuration.
    ;   passdb backend = tdbsam
    ;   passdb backend = smbpasswd

    # Using the following line enables you to customise your configuration
    # on a per machine basis. The %m gets replaced with the netbios name
    # of the machine that is connecting.
    # Note: Consider carefully the location in the configuration file of
    #       this line. The included file is read at that point.
    ;   include = /usr/local/etc/smb.conf.%m

    # Most people will find that this option gives better performance.
    # See the chapter 'Samba performance issues' in the Samba HOWTO Collection
    # and the manual pages for details.
    # You may want to add the following on a Linux system:
    #   SO_RCVBUF=8192 SO_SNDBUF=8192
    socket options = TCP_NODELAY

    # Configure Samba to use multiple interfaces
    # If you have multiple network interfaces then you must list them
    # here. See the man page for details.
    ;   interfaces = 192.168.12.2/24 192.168.13.2/24

    # Browser Control Options:
    # set local master to no if you don't want Samba to become a master
    # browser on your network. Otherwise the normal election rules apply
    ;   local master = no

    # OS Level determines the precedence of this server in master browser
    # elections. The default value should be reasonable
    ;   os level = 33

    # Domain Master specifies Samba to be the Domain Master Browser. This
    # allows Samba to collate browse lists between subnets. Don't use this
    # if you already have a Windows NT domain controller doing this job
    ;   domain master = yes

    # Preferred Master causes Samba to force a local browser election on startup
    # and gives it a slightly higher chance of winning the election
    ;   preferred master = yes

    # Enable this if you want Samba to be a domain logon server for
    # Windows95 workstations.
    ;   domain logons = yes

    # if you enable domain logons then you may want a per-machine or
    # per user logon script
    # run a specific logon batch file per workstation (machine)
    ;   logon script = %m.bat
    # run a specific logon batch file per username
    ;   logon script = %U.bat

    # Where to store roving profiles (only for Win95 and WinNT)
    #   %L substitutes for this servers netbios name, %U is username
    #   You must uncomment the [Profiles] share below
    ;   logon path = \\%L\Profiles\%U

    # Windows Internet Name Serving Support Section:
    # WINS Support - Tells the NMBD component of Samba to enable it's WINS Server
    ;   wins support = yes

    # WINS Server - Tells the NMBD components of Samba to be a WINS Client
    # Note: Samba can be either a WINS Server, or a WINS Client, but NOT both
    ;   wins server = w.x.y.z

    # WINS Proxy - Tells Samba to answer name resolution queries on
    # behalf of a non WINS capable client, for this to work there must be
    # at least one WINS Server on the network. The default is NO.
    ;   wins proxy = yes

    # DNS Proxy - tells Samba whether or not to try to resolve NetBIOS names
    # via DNS nslookups. The default is NO.
    dns proxy = no

    # charset settings
    ;   display charset = ASCII
    ;   unix charset = ASCII
    ;   dos charset = ASCII

    # These scripts are used on a domain controller or stand-alone
    # machine to add or delete corresponding unix accounts
    ;   add user script = /usr/sbin/useradd %u
    ;   add group script = /usr/sbin/groupadd %g
    ;   add machine script = /usr/sbin/adduser -n -g machines -c Machine -d /dev/null -s /bin/false %u
    ;   delete user script = /usr/sbin/userdel %u
    ;   delete user from group script = /usr/sbin/deluser %u %g
    ;   delete group script = /usr/sbin/groupdel %g

    unix extensions = no

    Read the article

  • Entity Framework, full-text search and temporary tables

    - by markus
    I have a LINQ-2-Entity query builder, nesting different kinds of Where clauses depending on a fairly complex search form. Works great so far. Now I need to use a SQL Server fulltext search index in some of my queries. Is there any chance to add the search term directly to the LINQ query, and have the score available as a selectable property? If not, I could write a stored procedure to load a list of all row IDs matching the full-text search criteria, and then use a LINQ-2-Entity query to load the detail data and evaluate other optional filter criteria in a loop per row. That would be of course a very bad idea performance-wise. Another option would be to use a stored procedure to insert all row IDs matching the full-text search into a temporary table, and then let the LINQ query join the temporary table. Question is: how to join a temporary table in a LINQ query, as it cannot be part of the entity model?
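
    For context, the fallback I have in mind (a sketch; the entity set and helper method names are hypothetical) is to fetch just the matching row IDs with a stored procedure or raw SQL, and feed them back into the LINQ-to-Entities query with Contains, which - in EF 4 and later, as I understand it - gets translated into an IN (...) clause and is fine for a moderate number of IDs:

        // Sketch: run the full-text search outside LINQ, then filter the entity query by the IDs.
        // GetFullTextMatchIds and Articles are hypothetical names.
        List<int> matchingIds = GetFullTextMatchIds(searchTerm);   // e.g. wraps CONTAINSTABLE in a proc

        List<Article> results = (from a in context.Articles
                                 where matchingIds.Contains(a.Id)
                                 select a).ToList();

    That avoids the per-row round trips, but it's still not the temp-table join I was asking about, and the rank/score column is lost unless the procedure returns it separately.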

    Read the article

  • Using a DataSet instead of custom business entities in soa and n-tier architecture

    - by kathy
    I'm working on a large, high-volume transactional enterprise application which has been designed using an n-tier architecture. It was developed on the .NET platform using C#, VB.NET, Framework 3.5, ObjectDataSources, DataSet, WCF, ASP.NET UpdatePanel, JavaScript, JSON, and 3rd-party tools. The application is supposed to be really scalable, easily maintained and robust, to support integrations, and to make sure that my services are created in a format that can be understood by other systems. The problem is, this application is about 70% complete, but now I am wondering whether the following will cause us future issues: I'm using a DataSet and a DataTable to get/set the data from/to the stored procedures via the ObjectDataSources, and I was wondering if this would prevent my application from achieving the above goals. Actually, I am not anti-OO. I write lots of classes for different purposes, but I didn't use entity objects (custom business entities) instead of the above approach because I have a large database that may contain 50 tables, and I was just afraid to create entities for each table; then in the future, if I need to change the schema of the database, it might have a huge effect on the application?
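
    For concreteness, this is roughly the comparison I'm weighing (a sketch with hypothetical names): a hand-written business entity per table versus pulling the same values out of an untyped DataRow.

        // Sketch with hypothetical names: a plain business entity for one table...
        public class Customer
        {
            public int CustomerId { get; set; }
            public string Name { get; set; }
        }

        // ...versus reading the same values out of a DataRow in a DataSet.
        DataRow row = customerDataSet.Tables["Customer"].Rows[0];
        int id = (int)row["CustomerId"];
        string name = row["Name"] as string;

    The entity version gives compile-time checking, but it's exactly that per-table class maintenance under schema changes that worries me.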

    Read the article

  • Alternative to Nested Loop For Comparison

    - by KGVT
    I'm currently writing a program that needs to compare each file in an ArrayList of variable size. Right now, the way I'm doing this is through a nested code loop:

    if (tempList.size() > 1) {
        for (int i = 0; i <= tempList.size() - 1; i++)          //Nested loops. I should feel dirty?
            for (int j = i + 1; j <= tempList.size() - 1; j++) { //*Gets sorted.
                System.out.println(checkBytes(tempList.get(i), tempList.get(j)));
            }
    }

    I've read a few differing opinions on the necessity of nested loops, and I was wondering if anyone had a more efficient alternative. At a glance, each comparison is going to need to be done either way, so the performance should be fairly steady, but I'm moderately convinced there's a cleaner way to do this. Any pointers?

    Read the article

  • Custom broadcast events in AS3?

    - by Ender
    In Actionscript 3, most events use the capture/target/bubble model, which is pretty popular nowadays: When an event occurs, it moves through the three phases of the event flow: the capture phase, which flows from the top of the display list hierarchy to the node just before the target node; the target phase, which comprises the target node; and the bubbling phase, which flows from the node subsequent to the target node back up the display list hierarchy. However, some events, such as the Sprite class's enterFrame event, do not capture OR bubble - you must subscribe directly to the target to detect the event. The documentation refers to these as "broadcast events." I assume this is for performance reasons, since these events will be triggered constantly for each sprite on stage and you don't want to have to deal with all that superfluous event propagation. I want to dispatch my own broadcast events. I know you can prevent an event from bubbling (Event.bubbles = false), but can you get rid of capture as well?

    Read the article

  • Daubechies-4 Transform in MATLAB

    - by Myx
    Hello: I have a 4x4 matrix which I wish to decompose into 4 frequency bands (LL, HL, LH, HH where L=low, H=high) by using a one-level Daubechies-4 wavelet transform. As a result of the transform, each band should contain 2x2 coefficients. How can I do this in MATLAB? I know that MATLAB has dbaux and dbwavf functions. However, I'm not sure how to use them and I also don't have the wavelet toolbox. Any help is greatly appreciated. Thanks.
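
    For reference, and assuming the standard orthonormal D4 convention (please double-check this against a trusted source before relying on it), the analysis filters and the separable one-level 2-D step are:

        % Daubechies-4 (D4) analysis filters, orthonormal convention:
        h = \left( \tfrac{1+\sqrt{3}}{4\sqrt{2}},\ \tfrac{3+\sqrt{3}}{4\sqrt{2}},\
                   \tfrac{3-\sqrt{3}}{4\sqrt{2}},\ \tfrac{1-\sqrt{3}}{4\sqrt{2}} \right)
            \quad \text{(low-pass)}, \qquad
        g_k = (-1)^k\, h_{3-k} \quad \text{(high-pass)}.

        % Build the 2x4 periodized filter-and-downsample matrices L (from h) and H (from g).
        % One separable level on the 4x4 matrix X then gives four 2x2 bands:
        LL = L\,X\,L^{\mathsf T}, \quad
        HL = H\,X\,L^{\mathsf T}, \quad
        LH = L\,X\,H^{\mathsf T}, \quad
        HH = H\,X\,H^{\mathsf T}.

    (Which of the mixed bands is called HL vs. LH depends on the row/column convention, so label them according to whichever reference you are matching.)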

    Read the article

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times, and I'm just looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyperthreaded cores? Is there a big difference in time between i7 and Xeon? Etc., etc. I just need advice from people who've put together kick-a build clusters. We've got most of the usual things to speed up builds in place (precompiled headers, ccache, local gigabit connections between the machines, tons of RAM, etc.), so please just give advice on the best processors to use. Money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay

    Read the article

  • Efficiently Combine MatchCollections in .Net Regex

    - by Laramie
    In the simplified example, there are two regular expressions, one case-sensitive, the other not. The idea is to efficiently create an IEnumerable collection (see "combined" below) combining the results.

    string test = "abcABC";
    string regex = "(?<grpa>a)|(?<grpb>b)|(?<grpc>c)";
    Regex regNoCase = new Regex(regex, RegexOptions.IgnoreCase);
    Regex regCase = new Regex(regex);
    MatchCollection matchNoCase = regNoCase.Matches(test);
    MatchCollection matchCase = regCase.Matches(test);

    //Combine matchNoCase and matchCase into an IEnumerable
    IEnumerable<Match> combined = null;

    foreach (Match match in combined)
    {
        //Use the Index and (successful) Groups properties
        //of the match in another operation
    }

    In practice, the MatchCollections might contain thousands of results and be run frequently using long, dynamically created regexes, so I'd like to shy away from copying the results into arrays, etc. I am still learning LINQ and am fuzzy on how to go about combining these, or on what the performance hit to an already sluggish process will be.
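
    In case it helps to see the shape I'm after, this is the kind of thing I had in mind (a sketch): Cast the non-generic MatchCollections to IEnumerable<Match>, Concat them lazily, and only pay for ordering if it's actually needed.

        // Sketch: combine the two MatchCollections without copying them into arrays first.
        IEnumerable<Match> combined = matchNoCase.Cast<Match>()
                                                 .Concat(matchCase.Cast<Match>())
                                                 .OrderBy(m => m.Index);   // OrderBy buffers; drop it if order doesn't matter

        foreach (Match match in combined)
        {
            // Use match.Index and the (successful) match.Groups here.
        }

    What I'm fuzzy on is whether this is sensible for thousands of matches, or whether there's a cheaper way to merge two collections that are already ordered by index.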

    Read the article

  • best practices for setting up a development environment

    - by Sharique
    I use Linux as my primary OS. I need some suggestions on how to set up my desktop and development environment. I work mostly on .NET and Drupal, but some of the time on other LAMP products and on C/C++ and Qt. I'm also interested in mobile (Android...) and embedded development. Currently I install everything on my main OS, even if I only use it a little, and I use VMs very little. Should I use a separate VM for each kind of development (like one for .NET/Mono, another for C++, one for mobile, one for databases only, one for xyz things, etc.), or keep the primary development environment on the main OS and move the others into VMs? The main OS should not get messed up, things should be easy to organize (a must), and performance should be optimal.

    Read the article

  • Core Data iPad/iPhone BLOBS vs File system for 20k PDFs

    - by jamone
    I'm designing an iPad/iPhone app using Core Data. The main focus of the app is sorting and viewing up to 20,000 PDFs; they are ~200KB each. Typically it's best not to store BLOBs in a DB, but for desktop systems I've typically seen it said that if the blobs are < 1 MB then it's fine to use the DB. Are there any considerations I should take into account? If I store them in the file system, can I store them all in one directory and not have performance issues (I won't ever need to get a directory listing, since I'd store each one's path in the DB)? Should I divide them among a handful of directories? If so, is there a good rule on the number of files per directory?

    Read the article

  • How can I turn on DynamicCompression feature of IIS programmatically?

    - by LockeVN
    I'm making an installer program for my web application. My web application uses CSS and JS heavily, so I want to enable both static and dynamic HTTP compression for IIS 7/7.5. It needs two steps: I can modify the web.config and put in the <httpCompression> tag; that's OK. But DynamicContentCompression must be turned on as a Windows feature to make httpCompression work. Static compression is enabled by default in IIS 7 and IIS 7.5, but dynamic compression is not enabled by default (although it's available). I can do it manually by turning on Start/Control Panel/Programs and Features/Turn Windows features on or off/IIS/WWW Services/Performance Features/Dynamic Content Compression, but how can I turn that Windows feature on programmatically? I can use PowerShell or C# in my installer. Any idea how I might be able to do this? Thanks.
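
    One option I'm considering from C# is simply shelling out to dism.exe from the installer (a sketch below; the feature name IIS-HttpCompressionDynamic is my assumption for the DISM optional-feature name, so verify it with "dism /online /get-features" on a test machine, and note the installer must run elevated):

        using System.Diagnostics;

        // Sketch: enable the Dynamic Content Compression Windows feature via DISM.
        // The feature name is an assumption - confirm with: dism /online /get-features
        static void EnableDynamicCompression()
        {
            ProcessStartInfo psi = new ProcessStartInfo
            {
                FileName = "dism.exe",
                Arguments = "/online /enable-feature /featurename:IIS-HttpCompressionDynamic",
                UseShellExecute = false,
                CreateNoWindow = true
            };

            using (Process process = Process.Start(psi))
            {
                process.WaitForExit();
                // process.ExitCode == 0 means the feature was enabled (or was already on).
            }
        }

    A PowerShell or native setup API route would obviously be cleaner if one exists, which is really what I'm asking.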

    Read the article

  • How can a Delphi TForm / TPersistent object calculate its own deserialization time?

    - by mjustin
    For performance tests I need a way to measure the time needed for a form to load its definition from the DFM. All existing forms inherit from a custom form class. To capture the current time, this base class needs overridden methods as "extension points":

    - the start of the deserialization process
    - after the deserialization (can be implemented by overriding the Loaded procedure)
    - the moment just before the execution of the OnFormCreate event

    So the log for TMyForm.Create(nil) could look like:

    - 00.000 instance created
    - 00.010 before deserialization
    - 01.823 after deserialization
    - 02.340 before OnFormCreate

    Which TObject (or TComponent) methods are best suited? Maybe there are other extension points in the form creation process; please feel free to make suggestions.

    Read the article

  • Calculating the potential effect of inaccurate triangle vertex positions on the triangle edge lengths

    - by stingrey
    I'm not sure how to solve the following problem: I have a triangle with each of the three known vertex positions A, B, C being inaccurate, meaning they can each deviate by up to certain known radii rA, rB, rC in arbitrary directions. Given such a triangle, I want to calculate how much the difference of two specific edge lengths (for instance, the difference between the lengths of edge a and edge b) may change in the worst case. Is there any elegant mathematical solution to this problem? The naive way I thought of is calculating all 360^3 angle combinations and measuring the edge differences for each case, which has a rather high overhead.
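
    For what it's worth, a crude upper bound follows from the triangle inequality alone, and it might already be enough depending on how tight the estimate needs to be (it is a cap, not always attainable): each edge length can change by at most the sum of its two endpoint radii, so

        % Perturbed vertices A', B', C' with |A'-A| <= r_A, |B'-B| <= r_B, |C'-C| <= r_C.
        \bigl|\,|B'C'| - |BC|\,\bigr| \le r_B + r_C, \qquad
        \bigl|\,|A'C'| - |AC|\,\bigr| \le r_A + r_C,

        % hence for the difference of edge a = |BC| and edge b = |AC|:
        \bigl|\,(a' - b') - (a - b)\,\bigr| \;\le\; (r_B + r_C) + (r_A + r_C) \;=\; r_A + r_B + 2\,r_C .

    Whether that bound can actually be reached depends on the triangle's shape, which is why I'm still after the exact worst case.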

    Read the article

  • Cast to generic type in C#

    - by Andrej
    I have a Dictionary that maps a certain type to a certain generic object for that type. For example, typeof(LoginMessage) maps to MessageProcessor<LoginMessage>. Now the problem is to retrieve this generic object from the Dictionary at runtime. Or, to be more specific: to cast the retrieved object to the specific generic type. I need it to work something like this:

    Type key = message.GetType();
    MessageProcessor<key> processor = messageProcessors[key] as MessageProcessor<key>;

    Hope there is an easy solution to this. Edit: I do not want to use ifs and switches. Due to performance issues I cannot use reflection of some sort either.
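
    For context, the workaround I keep running into elsewhere (a sketch, with hypothetical names) is to give every MessageProcessor<T> a non-generic base, so the dictionary lookup returns something that can be invoked without knowing T at compile time - no reflection and no switch:

        // Sketch (hypothetical names): a non-generic base interface for the dictionary,
        // with the generic class doing the strongly typed work.
        public interface IMessageProcessor
        {
            void Process(object message);
        }

        public abstract class MessageProcessor<T> : IMessageProcessor
        {
            void IMessageProcessor.Process(object message)
            {
                Process((T)message);   // one cast at the boundary
            }

            public abstract void Process(T message);
        }

        // Lookup side:
        Dictionary<Type, IMessageProcessor> messageProcessors = new Dictionary<Type, IMessageProcessor>();
        // ... populate, e.g. messageProcessors[typeof(LoginMessage)] = new LoginMessageProcessor(); ...
        IMessageProcessor processor = messageProcessors[message.GetType()];
        processor.Process(message);

    What I'd still like to know is whether the strongly typed MessageProcessor<T> reference itself can be recovered, rather than going through a non-generic base like this.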

    Read the article

  • PHP on Windows: Apache or IIS7?

    - by josecortesp
    Here's the thing: due to a new project, I have to learn PHP from scratch. I'm on Windows now and I DON'T want to change any time soon, so don't tell me to change. I need to set up a Dreamweaver CS4/PHP 5 development environment, and I don't have a clue whether to use Apache or IIS 7. I just need some advice on this. Remarks: I really don't care about performance (yet), nor portability, and so on. I just need to set it up quickly and easily. Thanks in advance

    Read the article

  • Add comma-separated value of grouped rows to existing query

    - by Peter Lang
    I've got a view for reports that looks something like this:

    SELECT a.id,
           a.value1,
           a.value2,
           b.value1,
           /* (+50 more such columns) */
    FROM a
    JOIN b ON (b.id = a.b_id)
    JOIN c ON (c.id = b.c_id)
    LEFT JOIN d ON (d.id = b.d_id)
    LEFT JOIN e ON (e.id = d.e_id)
    /* (+10 more inner/left joins) */

    It joins quite a few tables and returns lots of columns, but indexes are in place and performance is fine. Now I want to add another column to the result, showing a comma-separated list of values, ordered by value, from table y (outer-joined via the intersection table x) if a.value3 IS NULL, else taking a.value3. To comma-separate the grouped values I use Tom Kyte's stragg; I could use COLLECT later. Pseudo-code for the SELECT would look like this:

    SELECT xx.id,
           COALESCE( a.value3, stragg( xx.val ) ) value3
    FROM ( SELECT x.id, y.val
           FROM x
           WHERE x.a_id = a.id
           JOIN y ON ( y.id = x.y_id )
           ORDER BY y.val ASC
         ) xx
    GROUP BY xx.id

    What is the best way to do it? Any tips?

    Read the article
