Search Results

Search found 23103 results on 925 pages for 'performance issues and ha'.

  • Strange CSS Positioning issue with a fader on a responsive site

    - by EPICWebDesign
    I'm developing a prototype responsive WordPress theme version of my homepage, and I'm running into issues with the fader on the homepage: http://dev.epicwebdesign.ca/epicblog/ When the image switches, it uses position:absolute to overlay the pictures, then goes back to position:relative. I need the absolute positioning to be relative to the bottom-left corner of the menu instead of the top-left corner of #wrap so it doesn't overlap. I tried putting it in a container div like this: http://wiki.orbeon.com/forms/doc/contributor-guide/browser#TOC-Absolutely-positioned-box-inside-a-box-with-overflow:-auto-or-hidden but that doesn't seem to work. Any ideas? There are major CSS compatibility issues in IE; try it in Chrome. It should look similar to http://epicwebdesign.ca. When the browser window is shrunk horizontally, the whole theme compensates.

    Read the article

  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). Total received file size can vary from as small as 10 KB to over 4 GB. One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but obviously will result in an out-of-memory exception for large files. An alternative option is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out-of-memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available) - I'd like to avoid disk access as much as possible, but I don't know of a way to control this. I did some testing and most of the time, there seems to be little performance difference between, say, 10,000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e. the number of writes). Obviously, MemoryStream size reallocation is also a factor. Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to the memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to a FileStream; then, read in chunks from both streams for processing the received data (first process 500 MB from the MemoryStream, dispose of it, then read from the FileStream)? Another solution is to use a custom memory stream implementation that doesn't require contiguous address space for internal array allocation (i.e. a linked list of memory streams); this way, at least in 64-bit environments, out-of-memory exceptions should no longer be an issue. Con: extra work, more room for mistakes. So how do FileStream vs. MemoryStream reads/writes behave in terms of disk access and memory caching, i.e. the data size/performance balance? I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access the disk when being written to. Any help would be appreciated.
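
    A minimal Java sketch of the spill-to-disk idea under discussion (the question is about .NET, so MemoryStream/FileStream become ByteArrayOutputStream/FileOutputStream here; the class name and threshold handling are invented). Apache Commons IO ships a DeferredFileOutputStream that implements essentially the same pattern:

        import java.io.*;

        // Buffers in memory until a size threshold, then spills everything to a
        // temp file and keeps writing there. Illustrative only.
        public class SpillOverOutputStream extends OutputStream {
            private final long threshold;
            private ByteArrayOutputStream memory = new ByteArrayOutputStream();
            private OutputStream disk; // null until we spill
            private File spillFile;
            private long written;

            public SpillOverOutputStream(long threshold) {
                this.threshold = threshold;
            }

            @Override
            public void write(byte[] buf, int off, int len) throws IOException {
                if (disk == null && written + len > threshold) {
                    spillFile = File.createTempFile("recv", ".tmp");
                    disk = new BufferedOutputStream(new FileOutputStream(spillFile));
                    memory.writeTo(disk); // move what was buffered so far
                    memory = null;
                }
                (disk != null ? disk : memory).write(buf, off, len);
                written += len;
            }

            @Override
            public void write(int b) throws IOException {
                write(new byte[] { (byte) b }, 0, 1);
            }

            @Override
            public void close() throws IOException {
                if (disk != null) disk.close();
            }
        }

    Processing would then check whether the stream spilled and read back from either the in-memory buffer or the temp file, which mirrors the two-phase processing the question proposes.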

    Read the article

  • Quicktime Audio Extraction for Compressed Movies

    - by Noorul
    Hi all, I am trying to extract audio from a QuickTime movie. I followed the steps in http://developer.apple.com/quicktime/audioextraction.html and it works fine. But when I try to extract from any movie which is compressed (audio compressed as AAC), it gives a first-chance exception. In the call stack it shows CoreAudioToolbox.dll. If I do continue, it renders the audio without any issues. On the Mac, this works without any issues. Is this really anything to worry about? I am a QT beginner. Please help me. My QT version is 7.6.7 (1675) and I am using Windows 7.

    Read the article

  • Why do people still use C these days? [closed]

    - by Joshua
    C++ is clearly a far superior language to C, since it has many features that C lacks (although C++'s object model isn't as ideal as, say, C#'s). With the coming of the new C++0x standard, why hasn't C been phased out into obscurity? C++ has been around for so long, since the '80s. The Linux kernel has already been ported to C++ with negligible performance differences. I believe, with no evidence, that larger program structures benefit in performance if written in C++ rather than in C, if only because of object interaction. Don't get me started on "objects-in-C!" libraries, which are all a terrible hack. (Not that C++'s object model is the most ideal, but it is almost up to snuff with C#'s using common ad-hoc techniques.)

    Read the article

  • Which is better: creating a materialized view or a new table?

    - by Carson
    I have some demanding MySQL queries that need to grab the same up-to-date datasets from 5-7 MySQL tables. I am thinking of creating a table or materialized view to gather all the demanding columns from the other tables, so as to increase performance. If I create that table, I may need to do extra insert/update/delete operations each time the other tables are updated. If I create a materialized view, I am worried whether the performance can really be greatly improved, because data in the other tables changes very frequently; most likely, the view would need to be rebuilt every time before selecting from it. Any ideas? E.g. how to cache? Other extra measures I can take?
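
    For context: MySQL has no native materialized views, so the usual workaround is a summary table rebuilt on a schedule (or maintained by triggers). A rough Java/JDBC sketch of a scheduled rebuild, with every table, column and connection detail invented for illustration:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class SummaryRefresher {
            // Rebuilds a denormalized summary table and swaps it in atomically.
            // All names and the JDBC URL are placeholders; assumes order_summary
            // already exists from a previous run.
            public static void refresh() throws Exception {
                try (Connection c = DriverManager.getConnection(
                             "jdbc:mysql://localhost/shop", "user", "pass");
                     Statement st = c.createStatement()) {
                    st.execute("DROP TABLE IF EXISTS order_summary_new");
                    st.execute("CREATE TABLE order_summary_new AS "
                             + "SELECT o.id, cu.name, SUM(i.qty * i.price) AS total "
                             + "FROM orders o "
                             + "JOIN customers cu ON cu.id = o.customer_id "
                             + "JOIN order_items i ON i.order_id = o.id "
                             + "GROUP BY o.id, cu.name");
                    // RENAME TABLE is atomic in MySQL, so readers never see a
                    // half-built table.
                    st.execute("RENAME TABLE order_summary TO order_summary_old, "
                             + "order_summary_new TO order_summary");
                    st.execute("DROP TABLE order_summary_old");
                }
            }
        }

    Whether a periodic rebuild like this beats per-write maintenance of the summary table depends on how stale the cached data may get, which is exactly the trade-off the question raises.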

    Read the article

  • How to set a bean property before executing this f:event listener

    - by user
    How to set a bean property from a JSF page before executing this f:event listener: <f:event type="preRenderComponent" listener="#{bean.method}"/> I tried the code below, but it does not set the value on the bean property. <f:event type="preRenderComponent" listener="#{bean.method}"> <f:setPropertyActionListener target="#{bean.howMany}" value="2"/> </f:event> JSF 2.1.6 with PF 3.3. EDIT: Any issues with the code below? (This works! I just want to confirm whether there are any issues with it!?) <f:event type="preRenderComponent" listener="#{bean.setHowMany(15)}"/> <f:event type="preRenderComponent" listener="#{bean.method}"/>
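
    For context, a minimal sketch of the backing bean these snippets assume (the bean class itself is hypothetical; only the howMany property and method listener appear in the question):

        import javax.faces.bean.ManagedBean;
        import javax.faces.bean.ViewScoped;

        @ManagedBean(name = "bean")
        @ViewScoped
        public class Bean {
            private int howMany;

            public int getHowMany() { return howMany; }
            public void setHowMany(int howMany) { this.howMany = howMany; }

            // Listener for the preRenderComponent event; with the parameterized
            // variant above, setHowMany(15) has already run when this fires.
            public void method() {
                // use howMany here, e.g. load that many records
            }
        }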

    Read the article

  • OpenLayers FramedCloud Autosizing

    - by Kyle
    According to the documentation, I should be able to configure the size of my OpenLayers popup by declaring an OpenLayers.Size object in the FramedCloud constructor: this.popup = new OpenLayers.Popup.FramedCloud('featurePopup', this.options.feature.geometry.getBounds().getCenterLonLat(), new OpenLayers.Size(80, 60), this.template(this.formattedAttrs()), null, true, this.onPopupClose ); Currently the popup that is rendered is autosized no matter what dimensions I use in the constructor. I've tried manually setting the autosizing attribute of the FramedCloud to false, as well as manually adjusting the CSS styling for the popup, without achieving the results I need. I checked and found some similar issues in the OpenLayers 2.11 issues list, but I haven't found a workaround. Any ideas?

    Read the article

  • Is a program compiled with the -g gcc flag slower than the same program compiled without -g?

    - by e271p314
    I'm compiling a program with -O3 for performance and -g for debug symbols (in case of a crash, I can use the core dump). One thing bothers me a lot: does the -g option result in a performance penalty? When I look at the output of the compilation with and without -g, I see that the output without -g is 80% smaller than the output of the compilation with -g. If the extra space goes to the debug symbols, I don't care about it (I guess), since this part is not used during runtime. But if for each instruction in the compilation output without -g I need to do 4 more instructions in the compilation output with -g, then I certainly prefer to stop using the -g option, even at the cost of not being able to process core dumps. How can I know the size of the debug symbols section inside the program, and in general, does compilation with -g create a program which runs slower than the same code compiled without -g?

    Read the article

  • Is there a way of combining Ogre3d's Hikari and BlazeDS?

    - by zooropa
    I am trying to load a SWF into an Ogre3d/Hikari application, and that SWF uses BlazeDS to communicate with a Tomcat server. The SWF is from another project that is used in a browser environment, so the SWF is served from Tomcat to the client's browser. It seems like there are two main issues. The first is a sandbox issue with using the SWF in a different sandbox type than a browser environment. The other issue seems to be setting up the BlazeDS message broker and the other communication channels and endpoints. As far as both of these issues are concerned, it seems like the Hikari application is similar to an Adobe AIR application and can use similar approaches. Does this seem like a reasonable approach? Does anybody have any similar experiences? If so, how were things configured?

    Read the article

  • What Use are Threads Outside of Parallel Problems on MultiCore Systems?

    - by Robert S. Barnes
    Threads make the design, implementation and debugging of a program significantly more difficult. Yet many people seem to think that every task in a program that can be threaded should be threaded, even on a single-core system. I can understand threading something like an MPEG2 decoder that's going to run on a multicore CPU (which I've done), but what can justify the significant development costs threading entails when you're talking about a single-core system, or even a multicore system if your task doesn't gain significant performance from a parallel implementation? Or more succinctly, what kinds of non-performance-related problems justify threading? Edit: Well, I just ran across one instance that's not CPU-limited but where threads make a big difference: TCP, HTTP and the Multi-Threading Sweet Spot. Multiple threads are pretty useful when trying to max out your bandwidth to another peer over a high-latency network connection. Non-blocking I/O would use significantly less local CPU resources, but would be much more difficult to design and implement.
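
    To make the bandwidth point concrete, a rough sketch of several blocking downloads kept in flight at once (URLs, pool size and timeout are invented):

        import java.io.InputStream;
        import java.net.URL;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class ParallelFetch {
            // One thread per download; while one thread waits on the network,
            // the others keep the pipe full.
            public static void main(String[] args) throws InterruptedException {
                List<String> urls = List.of(
                        "http://peer.example.com/chunk1",
                        "http://peer.example.com/chunk2",
                        "http://peer.example.com/chunk3");
                ExecutorService pool = Executors.newFixedThreadPool(urls.size());
                for (String u : urls) {
                    pool.submit(() -> {
                        try (InputStream in = new URL(u).openStream()) {
                            byte[] buf = new byte[8192];
                            long total = 0;
                            for (int n; (n = in.read(buf)) != -1; ) total += n;
                            System.out.println(u + ": " + total + " bytes");
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(5, TimeUnit.MINUTES);
            }
        }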

    Read the article

  • Eval IronPython Scripts during ASP.NET Web Request; Static Engine or Not

    - by Josh Pearce
    I would like to create an ASP.NET MVC web application which has extensible logic that does not require a re-build. I was thinking of creating a filter which had an instance of the IronPython engine. What I would like to know is: how much overhead is there in creating a new engine during each web request, and would it be a better idea to keep a static engine around? However, if I were to keep a single static engine around, what are the issues I might run into as far as locking and script scope? Is it possible to have multiple scopes in the same IronPython engine so I don't get variable collisions and security issues between web requests?
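
    The question is about IronPython hosted in .NET, but the "one shared engine, one scope per request" idea can be sketched with Java's javax.script API (engine choice and script are placeholders; this assumes a JDK that still bundles Nashorn, i.e. 8 through 14):

        import javax.script.Bindings;
        import javax.script.ScriptEngine;
        import javax.script.ScriptEngineManager;

        public class SharedEngineDemo {
            // One engine created once; each request gets fresh Bindings so
            // request-scoped variables cannot collide. Whether concurrent
            // eval() on one engine is safe depends on the engine's THREADING
            // attribute, which is the same locking question the post raises.
            private static final ScriptEngine ENGINE =
                    new ScriptEngineManager().getEngineByName("nashorn");

            public static Object evalForRequest(String script, String user)
                    throws Exception {
                Bindings scope = ENGINE.createBindings(); // per-request scope
                scope.put("user", user);
                return ENGINE.eval(script, scope);
            }

            public static void main(String[] args) throws Exception {
                System.out.println(evalForRequest("'hello ' + user", "alice"));
                System.out.println(evalForRequest("'hello ' + user", "bob"));
            }
        }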

    Read the article

  • jKey (JavaScript key shortcut plugin) Issue

    - by Oscar Godson
    A friend and I are writing a plugin for jQuery that makes it easy for devs to add key shortcuts, and we're damn close, but no cigar. We're having issues with the key combos. It seems like we are having issues when you call the same selector multiple times on a page. Try pressing alt+a... you'll see it works one time, then gets all mangled up. Anyone know how to fix it? It'll be on GitHub after it's corrected, and I'd be happy to add a "thank you to" link to whoever can fix this in the header with the copyright info :) It's nicely documented and I have all the code and stuff here. So... anyone? http://jsbin.com/azaha4

    Read the article

  • Single-page app with a high number of images working extremely slowly on iOS 8 Safari/WebView

    - by NikhilWanpal
    We are working on a WebView (not WKWebView, yet) app, and are observing that the app runs extremely slowly on iOS 8. The same app runs smoothly on lower versions of the OS like iOS 7 and iOS 6. So we tried it in Safari on iOS 8, and the performance is similar to iOS 6 and 7. The app is filled with images, and many are high resolution. While trying to trace the issue (trial and error!) we reduced the sizes and resolutions of the images and the performance improved, but it is still not on par with versions 6 and 7. We are unable to find any such issues reported elsewhere and are stuck. It would be great if we could get some pointers on this one.

    Read the article

  • How to get Apache-log-like debugging in Android logcat

    - by Nav
    I am currently using a WebView to load a webpage which contains lots of JavaScript, and I'm having lots of trouble debugging what exactly gets loaded, and when, in the WebView. Then I saw this post, where the OP seems to be using an Apache-style log to monitor page-load events in his WebView: Enhance webView performance (should be the same performance as native Web Browser). Can I get a similar utility plugin or anything so that I can use it with logcat in the DDMS view? If possible, please provide some resource as to how to configure it for Android.
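
    One built-in way to get that kind of visibility is to hook the WebView's page-load and console callbacks and write them to logcat (a sketch; the tag name is arbitrary):

        import android.graphics.Bitmap;
        import android.util.Log;
        import android.webkit.ConsoleMessage;
        import android.webkit.WebChromeClient;
        import android.webkit.WebView;
        import android.webkit.WebViewClient;

        public class WebViewLogging {
            private static final String TAG = "WebViewLoad"; // arbitrary tag

            public static void attach(WebView webView) {
                // Log page lifecycle events, roughly like an access log
                webView.setWebViewClient(new WebViewClient() {
                    @Override
                    public void onPageStarted(WebView v, String url, Bitmap favicon) {
                        Log.d(TAG, "started: " + url);
                    }

                    @Override
                    public void onPageFinished(WebView v, String url) {
                        Log.d(TAG, "finished: " + url);
                    }
                });
                // Forward JavaScript console output to logcat
                webView.setWebChromeClient(new WebChromeClient() {
                    @Override
                    public boolean onConsoleMessage(ConsoleMessage cm) {
                        Log.d(TAG, cm.message() + " @ "
                                + cm.sourceId() + ":" + cm.lineNumber());
                        return true;
                    }
                });
            }
        }

    Filtering on the tag (e.g. adb logcat -s WebViewLoad) then shows only these events in the DDMS logcat view.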

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when third-party installers are involved?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was: Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision, but this had the side effect of releases that fixed the bugs but introduced new issues because of half-finished features or bugfixes that were in trunk. Make customers wait until the next official release, which is usually a few months. We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. Then, new development would continue in trunk, and I can periodically merge specific fixes from trunk into the maintenance branch, and create a maintenance release when enough fixes are accumulated, while we continue to work on the next major update in parallel. I know we could also have a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me. The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in-place (all the setup projects and any dependencies that we don't compile ourselves are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files to the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code. I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with this approach. I want the creation of setup programs to be as automated as possible, and at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires or to make other changes), there is a problem. If they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project. 
    However, this means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine). Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solution I can think of is to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (i.e. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.

    Read the article

  • Benchmark of Java Try/Catch Block

    - by hectorg87
    I know that going into a catch block has some significant cost when executing a program; however, I was wondering if entering a try{} block also had any impact, so I started looking for an answer in Google. I found many opinions, but no benchmarking at all. Some answers I found were: Java try/catch performance, is it recommended to keep what is inside the try clause to a minimum? Try Catch Performance Java Java try catch blocks However, they didn't answer my question with facts, so I decided to try it for myself. Here's what I did. I have a csv file with this format: host;ip;number;date;status;email;uid;name;lastname;promo_code; where everything after status is optional and will not even have the corresponding ;, so when parsing, a validation has to be done to see if the value is there; here's where the try/catch issue came to my mind. The current code that I inherited at my company does this: StringTokenizer st=new StringTokenizer(line,";"); String host = st.nextToken(); String ip = st.nextToken(); String number = st.nextToken(); String date = st.nextToken(); String status = st.nextToken(); String email = ""; try{ email = st.nextToken(); }catch(NoSuchElementException e){ email = ""; } and it repeats what is done for email with uid, name, lastname and promo_code. I changed everything to: if(st.hasMoreTokens()){ email = st.nextToken(); } and in fact it performs faster when parsing a file that doesn't have the optional columns. Here are the average times: --- Trying:122 milliseconds --- Checking:33 milliseconds However, here's what confused me, and the reason I'm asking: when running the example with values for the optional columns in all 8000 lines of the CSV, the if() version still performs better than the try/catch version, so my question is: does the try block really have no performance impact on my code? The average times for this example are: --- Trying:105 milliseconds --- Checking:43 milliseconds Can somebody explain what's going on here? Thanks a lot
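
    For reference, a self-contained harness along the lines the question describes might look like this (the line content and iteration count are invented, and a fair benchmark would also need JVM warm-up):

        import java.util.NoSuchElementException;
        import java.util.StringTokenizer;

        public class TokenizerBenchmark {
            static String tryCatchVariant(StringTokenizer st) {
                try {
                    return st.nextToken();
                } catch (NoSuchElementException e) {
                    return "";
                }
            }

            static String checkVariant(StringTokenizer st) {
                return st.hasMoreTokens() ? st.nextToken() : "";
            }

            public static void main(String[] args) {
                String line = "host;ip;number;date;status"; // optional columns absent
                long start = System.nanoTime();
                for (int i = 0; i < 8000; i++) {
                    StringTokenizer st = new StringTokenizer(line, ";");
                    for (int j = 0; j < 5; j++) st.nextToken(); // mandatory columns
                    tryCatchVariant(st); // email: throws and catches every time
                }
                System.out.println("Trying: "
                        + (System.nanoTime() - start) / 1_000_000 + " ms");

                start = System.nanoTime();
                for (int i = 0; i < 8000; i++) {
                    StringTokenizer st = new StringTokenizer(line, ";");
                    for (int j = 0; j < 5; j++) st.nextToken();
                    checkVariant(st); // email: just tests hasMoreTokens()
                }
                System.out.println("Checking: "
                        + (System.nanoTime() - start) / 1_000_000 + " ms");
            }
        }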

    Read the article

  • How does one validate HTML that's generated from JS running in the browser?

    - by Henry Rose
    The page in question has very skeletal HTML sent over the wire to facilitate the building of a complicated UI in JavaScript. I'm now encountering a strange browser-compatibility issue that feels very much like I've got a markup problem somewhere on the page. I've validated the page as it comes across the wire using the W3C tool and ensured there are no issues in that HTML. I've also tried validating the output of running the following on the browser console: document.getElementsByTagName('html')[0].outerHTML I find that the output of the above introduces lots of new issues, such as removing the trailing '/' in self-closing tags. This added noise is distracting, but it also makes me uneasy about relying on this method. How do you validate markup that's rendered client side?

    Read the article

  • Thoughts on streamlining multiple .Net apps

    - by John Virgolino
    We have a series of ASP.Net applications that have been written over the course of 8 years, mostly in the first 3-4 years. They have been running quite well with little maintenance, but new functionality is being requested and we are running into IDE and platform issues. The apps were written in .Net 1.x and 2.x and run in separate spaces, but are presented as a single suite of applications which use a common navigation toolbar (implemented as a user control). Every time we want to add something to a menu in the nav, we have to modify it in all the apps, which is a pain. Also, between the various versions of Crystal Reports and the fact that we used tables to organize the visual elements, we end up with a mess, especially with all the different .Net versions running. We need to streamline the suite of apps and make it easier to add on new apps without a hassle. We also need to bring all these apps under one .Net platform and IDE. In addition, there is a WordPress blog styled to match the style of the application suite "integrated" into the UI, and a link to a MediaWiki wiki application as well. My current thinking is to use an open-source content management system (CMS) like Joomla (PHP-based unfortunately, but it works well) as the user interface framework for style templating and menu management. Joomla's article management would allow us to migrate the wiki content into articles which could be published without interfering with the .Net apps. Then essentially use an IFrame within an "article" to "host" the .Net application, then... Upgrade the .Net apps to VS2010, strip out all the common header/footer controls and migrate the styles to use the style sheets used in the CMS. As I write this, I certainly realize this is a lot of work; there are optimization issues this may cause, using IFrames seems a bit like cheating, and I've read about issues with IFrames. I know that we could use .Net application styling, but it seems like a lot more work (not sure really). Also, the use of a CMS to handle the blog and wiki is appealing, unless there is a .Net CMS out there that can handle all of these requirements. Given this information, am I totally going in the wrong direction? We tried to use open source and integrate it over time, but now this has become hard to maintain. Am I not aware of some technology out there that will meet our requirements? Did we do this right, and should we just focus on getting the .Net streamlined? I understand that no matter what we do, it's going to be a lot of work. The community's considerable experience would be helpful. Thanks!! PS - A complete rewrite is not an option.

    Read the article

  • Indexed key vs indexed separate columns, which one is faster?

    - by Jerry
    In MySQL, from a pure performance perspective, if I have a table with a large amount of data and a 10/1 read/write ratio, is it faster in read/write performance to have 4 search criteria in separate columns, all indexed, or to have them combined into one single string acting as a key, stored in one indexed column? E.g. take a table with 5 columns (first name, last name, sex, country and file) where the first four columns will ALWAYS be given as part of the search parameters, versus a table with two columns, key and file, where the value of key can be john-smith-male-australia?? I don't quite get the pros and cons. The point I'm trying to stress is the fact that all parameters will be given in a search.

    Read the article

  • Data Warehouse: One Database or many?

    - by drrollins
    At my new company, they keep all data associated with the data warehouse, including import, staging, audit, dimension and fact tables, together in the same physical database. I've been a database developer for a number of years now, and this consolidation of function and form seems counter to everything I know. It seems to make security, backup/restore and performance management more manually intensive. Is this something that is done in the industry? Are there substantial reasons for doing or not doing it? The platform is Netezza. The size is in terabytes, hundreds of millions of rows. What I'm looking to get from answers to this question is a solid understanding of how right or wrong this path is. From your experience, what are the issues I should focus on arguing if this is a path that will cause trouble for us down the road? If it is no big deal, then I'd like to know that as well.

    Read the article

  • Archiving Database Tables using Java

    - by HonorGod
    My application demands archiving database tables between Sybase and DB2 and vice versa, and within (DB2 to DB2 and Sybase to Sybase), using Java. I am trying to understand the best strategies around in terms of performance, implementation, ease of use and scalability. Here is my current process: source and destination tables with the acceptable parameters (from Java) are defined within XML. The application reads the source and destination configurations and executes them sequentially. The destination is sometimes optional, e.g. when the source is just deleting data from a specific table or when the source is just calling a stored procedure. The dataset between source and destination is extremely large (in the millions). Off the top of my head, it looks like I can define dependencies between multiple source and destination combinations and have them execute in parallel in multiple threads. But will this improve performance (I hope it will)? Are there any open-source frameworks for data archiving using Java? Any other thoughts on the implementation side will be really helpful. Thanks
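
    As a baseline for the performance discussion, a batched JDBC copy between two connections might be sketched like this (table and column names, batch size and fetch size are placeholders; driver-specific tuning for Sybase/DB2 is omitted):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class TableArchiver {
            public static void archive(Connection src, Connection dst) throws SQLException {
                dst.setAutoCommit(false);
                try (Statement read = src.createStatement()) {
                    read.setFetchSize(10_000); // hint: stream rows, don't load them all
                    try (ResultSet rs = read.executeQuery(
                                 "SELECT id, payload FROM source_table");
                         PreparedStatement write = dst.prepareStatement(
                                 "INSERT INTO dest_table (id, payload) VALUES (?, ?)")) {
                        int pending = 0;
                        while (rs.next()) {
                            write.setLong(1, rs.getLong("id"));
                            write.setString(2, rs.getString("payload"));
                            write.addBatch();
                            if (++pending % 10_000 == 0) { // commit in chunks
                                write.executeBatch();
                                dst.commit();
                            }
                        }
                        write.executeBatch();
                        dst.commit();
                    }
                }
            }
        }

    Running several such copies in parallel threads, one per independent source/destination pair, is then mostly a question of whether the source and destination servers can keep up, which is the parallelism question the post ends with.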

    Read the article
