Search Results

Search found 14924 results on 597 pages for 'selector performance'.

Page 437 of 597

  • Storing Tiled Level Data in J2ME game

    - by Alex
    I'm developing a J2ME game which uses tiled backgrounds for the levels, and my question is how to store this tile information in the game. At the moment it is stored as an array, with each number representing a different tile from the tile-sheet. This works well enough; however, I don't like the fact that it is 'hard-coded' into the game, because (at least in my opinion) it makes the levels harder to edit and new ones harder to design. I was also thinking that it would make adding a 'level pack' difficult. I'm not sure how that would be achieved anyway; it's not something I was planning on doing, I'm just curious.

    I was wondering whether I could store the level data in some external file and then load it into the game. The problem is that I don't know what the limitations of J2ME are regarding file I/O; can it read in any file the way desktop Java can? I am aware of the RMS, but from my experience I don't think it would work here (unless I am mistaken). Also, would loading the data this way be too big a performance hit, or is there another way to achieve what I am trying to do? As I said, the way I have it at the moment works fine, and if that is the only viable option then it will suffice.
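
    A minimal sketch of the kind of loader being asked about, assuming a hypothetical level file packaged as a resource inside the MIDlet JAR and a simple width/height-plus-tile-bytes format (CLDC's Class.getResourceAsStream and DataInputStream are available on MIDP, so no desktop-style file I/O is needed):

        import java.io.DataInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        // Hypothetical loader: reads a level packaged as a resource inside the MIDlet JAR.
        // Assumed format: one byte for width, one for height, then one byte per tile index.
        public class LevelLoader {

            public static int[][] loadLevel(String resourceName) throws IOException {
                InputStream raw = LevelLoader.class.getResourceAsStream(resourceName);
                if (raw == null) {
                    throw new IOException("Level resource not found: " + resourceName);
                }
                DataInputStream in = new DataInputStream(raw);
                try {
                    int width = in.readUnsignedByte();
                    int height = in.readUnsignedByte();
                    int[][] tiles = new int[height][width];
                    for (int y = 0; y < height; y++) {
                        for (int x = 0; x < width; x++) {
                            tiles[y][x] = in.readUnsignedByte(); // index into the tile-sheet
                        }
                    }
                    return tiles;
                } finally {
                    in.close();
                }
            }
        }

    With something along these lines, new levels or a 'level pack' become extra resource files (or data fetched and cached in RMS) rather than code changes.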

    Read the article

  • Oracle Loader for Hadoop 1.1.0.0.3

    - by mannamal
    We are pleased to announce availability of Oracle Loader for Hadoop 1.1.0.0.3, containing bug fixes and performance improvements to Oracle Loader for Hadoop. The updated product can be downloaded from here: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/big-data-downloads-1451048.html Note that the Oracle Loader for Hadoop 1.1.0.0.3 kit is a complete kit containing the product and bug fixes; fixes from the earlier version 1.1 patch releases are also included.

    Upgrading to Oracle Loader for Hadoop 1.1.0.0.3 (from versions 1.1.x):

    On the Oracle Big Data Appliance:
      1. Upload the new oraloader rpm to the first Oracle Big Data Appliance server. For example: /tmp/oraloader-1.1.0.0.3-1.x86_64.rpm
      2. As the root user, use dcli from the first Oracle Big Data Appliance server to copy the new rpm to all nodes. For example: #dcli -f /tmp/oraloader-1.1.0.0.3-1.x86_64.rpm -d /tmp/oraloader-replace.rpm
      3. As the root user, use dcli from the first Oracle Big Data Appliance server to replace the old oraloader rpm with the new one. For example: #dcli "rpm -e oraloader ; rpm -Uvh /tmp/oraloader-replace.rpm"

    On other hardware:
      1. Unzip oraloader-1.1.0.0.3.x86_64.zip at <location of install>
      2. Update OLH_HOME to point to <location of install>/oraloader-1.1.0.0.3

    Read the article

  • Storing editable site content?

    - by hmp
    We have a Django-based website for which we wanted to make some of the content (text, and business logic such as pricing plans) easily editable in-house, so we decided to store it outside the codebase. Usually the reason is one of the following:

      • It's something that non-technical people want to edit. One example is copywriting for a website: the programmers prepare a template with text that defaults to "Lorem ipsum...", and the real content is inserted later into the database.
      • It's something that we want to be able to change quickly, without the need to deploy new code (which we currently do twice a week). An example would be the features currently available to customers at different pricing tiers. Instead of hardcoding these, we read them from the database.

    The described solution is flexible, but there are some reasons why I don't like it:

      • Because the content has to be read from the database, there is a performance overhead. We mitigate that by using a caching scheme, but this also adds some complexity to the system.
      • Developers who run the code locally see the system in a significantly different state compared to how it runs on production. Automated tests also exercise the system in a different state.
      • Situations like testing new features on a staging server also get trickier: if the staging server doesn't have a recent copy of the database, it can be unexpectedly different from production. We could mitigate that by committing the new state to the repository occasionally (e.g. by adding data migrations), but that seems like the wrong approach. Is it?

    Any ideas how best to solve these problems? Is there a better approach to handling this content that I'm overlooking?

    Read the article

  • Are there design patterns or generalised approaches for particle simulations?

    - by romeovs
    I'm working on a project (for college) in C++. The goal is to write a program that can more or less simulate a beam of particles flying through the LHC synchrotron. Not wanting to rush into things, my team and I are thinking about how to implement this, and I was wondering if there are general design patterns that are used to solve this kind of problem. The general approach we have come up with so far is the following (a rough sketch of it appears further below):

      • there is a World that holds all objects
      • you can add objects to this World, such as Particle, Dipole and Quadrupole
      • time is cut up into discrete steps, and at each point in time, for each Particle, the magnetic and electric forces that each object in the World generates are calculated and summed up (luckily electromagnetism is linear)
      • each Particle moves accordingly (using a simple estimation approach to solve the differential equations of motion)
      • save the Particle positions
      • repeat

    This seems a good approach, but it is hard to take into account symmetries that might be present (such as the magnetic field of each Quadrupole), and it is thus suboptimal. To take such symmetries into account, for example the Quadrupole field, it would be much easier to (also) make space discrete and store the form of the Quadrupole field somewhere. (Since 2532 or so Quadrupoles are stored, this should lead to a massive gain in performance, not having to recalculate each Quadrupole field.) So, are there any design patterns? Is the World approach feasible, or is it old-fashioned, bad programming? What about symmetry, how is that generally taken into account?
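
    A minimal sketch of the time-stepped World loop described above, written in Java here purely for illustration (the project itself is C++); all class names are hypothetical:

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical sketch of the World/time-step structure described in the question.
        // Forces from all field sources are summed (electromagnetism is linear) and each
        // particle is advanced with a simple explicit Euler step.
        class Vector3 {
            double x, y, z;
            Vector3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
            void addScaled(Vector3 v, double s) { x += v.x * s; y += v.y * s; z += v.z * s; }
        }

        interface FieldSource {              // Dipole, Quadrupole, ... would implement this
            Vector3 forceOn(Particle p);     // force exerted on p in its current state
        }

        class Particle {
            Vector3 position = new Vector3(0, 0, 0);
            Vector3 velocity = new Vector3(0, 0, 0);
            double mass = 1.0;
        }

        class World {
            final List<FieldSource> sources = new ArrayList<FieldSource>();
            final List<Particle> particles = new ArrayList<Particle>();

            void step(double dt) {
                for (Particle p : particles) {
                    Vector3 total = new Vector3(0, 0, 0);
                    for (FieldSource s : sources) {
                        total.addScaled(s.forceOn(p), 1.0);   // sum all forces
                    }
                    p.velocity.addScaled(total, dt / p.mass); // v += (F/m) * dt
                    p.position.addScaled(p.velocity, dt);     // x += v * dt
                }
                // record particle positions here before the next step
            }
        }

    A precomputed, discretized field for the repeated Quadrupole geometry would slot in naturally as one FieldSource implementation that does a lookup instead of recalculating.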

    Read the article

  • Run a VirtualBox VM in a second X-Server with Graphic support

    - by Scindix
    I'm starting a VirtualBox VM (Windows 7) in a second X server (Ubuntu 14.04), and I'm using the following xinit script (/path/.vboximage):

        optirun VBoxManage startvm <vm name> &
        exec tinywm

    I noticed that while running VirtualBox normally under GNOME (Unity to be precise ;-) ) I get full graphics support, but when I run it on a second X server there seem to be some problems. E.g. Windows Aero doesn't seem to work, and Chrome WebGL demos run with poorer performance. I'm not a big Windows expert, so I don't know how I could check which graphics card (specification) is in use, but it is very obvious that something has changed when running the VM in the extra X server. Also, when I try to replace tinywm with compiz I get the Unity frame around the VM, which also seems to have no graphics acceleration (no transparency effects). So it seems that the X server doesn't have graphics acceleration at all. I have an NVidia 525m and an Intel HD3000, which are both capable of advanced graphics. I'm starting the above script with:

        startx /path/.vboximage - :1

    How could I fix that?

    Read the article

  • StreamInsight V2.0 Released!

    - by Roman Schindlauer
    The StreamInsight Team is proud to announce the release of StreamInsight V2.0! This is the version that ships with SQL 2012, and as such it has already been available through Connect to SQL CTP customers since December. As part of the SQL 2012 launch activities, we are now making V2.0 available to everyone, following our tradition of providing a separate download page. StreamInsight V2.0 includes a number of stability and performance fixes over its predecessor V1.2. Moreover, it introduces a dependency on the .NET Framework 4.0, as well as on SQL 2012 license keys. For these reasons, we decided to bump the major version number, even though V2.0 does not add new features or API surface. It can be regarded as a stepping stone to the upcoming release 2.1, which will contain significant new APIs (that will depend on .NET 4.0). Head over here to download StreamInsight V2.0. The updated Books Online can be found here. Update: For instructions on how to make your existing application work against the new bits without recompilation, see here. Regards, The StreamInsight Team

    Read the article

  • Series On Embedded Development (Part 3) - Runtime Optionality

    - by Darryl Mocek
    What is runtime optionality? Runtime optionality means writing and packaging your code in such a way that all of the features are available at runtime, but aren't loaded and used if the feature isn't used. The code is separate, and you can even remove the code to save persistent storage if you know the feature will not be used. In native programming terms, it's splitting your application into separate shared libraries so you only have to load what you're using, which means it only impacts volatile memory when enabled at runtime. All the functionality is there, but if it's not used at runtime, it's not loaded. A good example of this in Java is JVMTI, Java's Virtual Machine Tool Interface. On smaller, embedded platforms, these libraries may not be there. If the libraries are not there, there's no effect on the runtime as long as you don't try to use the JVMTI features.

    There is a trade-off between size/performance and flexibility here. Putting code in separate libraries means loading that code will take longer and it will typically take up more persistent space. However, if the code is rarely used, you can save volatile memory by including it in a separate library. You can also use this method in Java by putting rarely-used code into one or more separate JARs. Loading a JAR and parsing it takes CPU cycles and volatile memory. Putting all of your application's code into a single JAR means more processing for that JAR. Consider putting rarely-used code in a separate library/JAR.
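
    A minimal sketch of the reflection-based flavour of this pattern in Java, assuming a hypothetical optional feature class com.example.profiler.Profiler that may or may not be on the class path:

        // Hypothetical example: the optional feature is only loaded (and its class file only
        // parsed) if it is actually requested at runtime; if its JAR is absent, the
        // application keeps working without it.
        public class OptionalFeature {

            public static Runnable loadProfilerIfAvailable() {
                try {
                    Class<?> cls = Class.forName("com.example.profiler.Profiler");
                    return (Runnable) cls.getDeclaredConstructor().newInstance();
                } catch (ClassNotFoundException e) {
                    return null;              // feature JAR not shipped on this device
                } catch (ReflectiveOperationException e) {
                    return null;              // present but unusable; degrade gracefully
                }
            }

            public static void main(String[] args) {
                Runnable profiler = loadProfilerIfAvailable();
                if (profiler != null) {
                    profiler.run();           // feature enabled at runtime
                } else {
                    System.out.println("Profiler not installed; continuing without it.");
                }
            }
        }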

    Read the article

  • Algorithms for Data Redundancy and Failover for distributed storage system?

    - by kennetham
    I'm building a distributed storage system that works with different storage sizes. For instance, my storage devices have sizes of 50GB, 70GB, 150GB, 250GB and 1000GB: five storage devices in one system. My application will store files of any type to the storage system.

    Question: how can I build distributed storage with data redundancy and fail-over in mind, storing documents, videos, any type of file, while ensuring that should any one of the storage devices fail, there is another copy of those files on another storage device? The concern is that 50GB of storage can only hold a certain maximum number of files compared to 70GB, 150GB, etc. Bringing these five storage devices together like one cloud storage, is there any logical way to distribute or store the files through my application? How do I ensure data redundancy across different storage sizes? Is there any algorithm to collate multiple blob files into a single file archive? What is the best solution for one cloud storage built from multiple different storage sizes?

    I open this topic with the objective of discussing the best way to implement this idea, assuming simplicity: what the issues of this implementation are, performance measurements, and discussion of the limitations.
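
    Not an answer to the whole question, but a minimal sketch of one common building block, capacity-weighted replica placement, using hypothetical node names and the device sizes mentioned above:

        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.Random;

        // Hypothetical sketch: pick 'replicas' distinct storage nodes for a file, with the
        // probability of choosing a node proportional to its remaining free capacity, so the
        // 50GB device fills (relatively) no faster than the 1000GB one.
        public class ReplicaPlacer {

            public static List<String> pickNodes(Map<String, Long> freeBytesByNode,
                                                 int replicas, Random rng) {
                Map<String, Long> remaining = new LinkedHashMap<String, Long>(freeBytesByNode);
                List<String> chosen = new ArrayList<String>();
                while (chosen.size() < replicas && !remaining.isEmpty()) {
                    long total = 0;
                    for (long free : remaining.values()) total += free;
                    long ticket = (long) (rng.nextDouble() * total);
                    String pick = null;
                    for (Map.Entry<String, Long> e : remaining.entrySet()) {
                        ticket -= e.getValue();
                        if (ticket < 0) { pick = e.getKey(); break; }
                    }
                    if (pick == null) pick = remaining.keySet().iterator().next();
                    chosen.add(pick);
                    remaining.remove(pick);   // replicas must land on distinct devices
                }
                return chosen;
            }

            public static void main(String[] args) {
                Map<String, Long> nodes = new LinkedHashMap<String, Long>();
                nodes.put("disk-50GB", 50L * 1_000_000_000L);
                nodes.put("disk-70GB", 70L * 1_000_000_000L);
                nodes.put("disk-150GB", 150L * 1_000_000_000L);
                nodes.put("disk-250GB", 250L * 1_000_000_000L);
                nodes.put("disk-1000GB", 1000L * 1_000_000_000L);
                System.out.println(pickNodes(nodes, 2, new Random()));
            }
        }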

    Read the article

  • Welcome!

    - by mannamal
    Welcome to the Oracle Big Data Connectors blog, which will focus on posts related to integrating data on a Hadoop cluster with Oracle Database. In particular, the blog will focus on best practices, usage notes, and performance tips for using Oracle Loader for Hadoop and Oracle Direct Connector for HDFS, which are part of Oracle Big Data Connectors. Oracle Big Data Connectors 1.0 also includes Oracle R Connector for Hadoop and Oracle Data Integrator Application Adapters for Hadoop.

    Oracle Loader for Hadoop: Oracle Loader for Hadoop loads data from Hadoop to Oracle Database. It runs as a MapReduce job on Hadoop to partition, sort, and convert the data into an Oracle-ready format, offloading to Hadoop the processing that is typically done using database CPUs. The data is then loaded to the database by the Oracle Loader for Hadoop job (online load) or written out as Oracle Data Pump files for load and access later (offline load) with Oracle Direct Connector for HDFS.

    Oracle Direct Connector for HDFS: Oracle Direct Connector for HDFS is a connector for high-speed access of data on HDFS from Oracle Database. With this connector, Oracle SQL can be used to directly query data on HDFS. The data can be Oracle Data Pump files generated by Oracle Loader for Hadoop or delimited text files. The connector can also be used to load data into the database using SQL.

    Read the article

  • More than one way to skin an Audit

    - by BuckWoody
    I get asked quite a bit about auditing in SQL Server. By "audit", people mean everything from tracking logins to finding out exactly who ran a particular SELECT statement. In the really early versions of SQL Server, we didn't have a great story for very granular audits, so lots of workarounds were suggested. As time progressed, more and more audit capabilities were added to the product, and in typical database platform fashion, as we added a feature we didn't often take the others away. So now, instead of not having an option to audit actions by users, you might face the opposite problem - too many ways to audit! You can read more about the options you have for tracking users here: http://msdn.microsoft.com/en-us/library/cc280526(v=SQL.100).aspx  In SQL Server 2008, we introduced SQL Server Audit, which uses Extended Events to provide a simple way to implement high-level or granular auditing. You can read more about that here: http://msdn.microsoft.com/en-us/library/dd392015.aspx  As with any feature, you should understand what your needs are first. Auditing isn't "free" in the performance sense, so you need to make sure you're only auditing what you need to.

    Read the article

  • creative & complex vs simple and readable

    - by Shirish11
    Which is a better option? It's not always the case that when you write something creative your code is going to look ugly, but at times it does get a bit ugly. For example, this is simple and readable:

        if ( (object1(0)==object2(0)) &&
             (object1(1)==object2(1)) &&
             (object1(2)==object2(2)) &&
             (object1(3)==object2(3)) ) {
            retval = true;
        } else {
            retval = false;
        }

    whereas having something like this will make some newbies scratch their heads:

        bool retValue = (object1(0)==object2(0)) &&
                        (object1(1)==object2(1)) &&
                        (object1(2)==object2(2)) &&
                        (object1(3)==object2(3));

    So which do I go for? Writing the simplest code everywhere might sometimes hamper my performance. What I could think of was commenting wherever necessary, but at times you get too curious about what is actually happening. Any suggestions are welcome.

    Read the article

  • Cheap server stress testing

    - by acrosman
    The IT department of the nonprofit organization I work for recently got a new virtual server running CentOS (with Apache and PHP 5), which is supposed to host our website. During the process of setting up the server I discovered that the slightest use of the new machine caused major performance problems (I couldn't extract tarballs without bringing it to a halt). After several weeks of casting about in the dark by tech support, it now appears to be working fine, but I'm still nervous about moving the main site there. I have no budget to work with (so no software or services that require money), although due to recent cutbacks I have several older desktops that I could use if it helps. The site doesn't need to withstand massive amounts of traffic (it's a Drupal site with just a few thousand visitors a day), but I would like to put the server through its paces a bit before moving the main site over. What cheap tools can I use to get a sense of whether the server can withstand even low levels of traffic? I'm not looking to test the site itself yet, just the fundamental operation of the server.
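
    A rough sketch of the kind of cheap, do-it-yourself load generator one of those spare desktops could run; the URL, thread count, and request count below are placeholders, and this is no substitute for a proper benchmarking tool:

        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicLong;

        // Hypothetical sketch: hammer one URL with a fixed number of concurrent workers and
        // report how many requests completed, to see whether the server copes with modest load.
        public class TinyLoadTest {

            public static void main(String[] args) throws Exception {
                final String target = "http://staging.example.org/";   // placeholder URL
                final int threads = 20;
                final int requestsPerThread = 100;
                final AtomicLong ok = new AtomicLong();
                final AtomicLong failed = new AtomicLong();

                ExecutorService pool = Executors.newFixedThreadPool(threads);
                long start = System.currentTimeMillis();
                for (int t = 0; t < threads; t++) {
                    pool.submit(new Runnable() {
                        public void run() {
                            for (int i = 0; i < requestsPerThread; i++) {
                                try {
                                    HttpURLConnection conn =
                                        (HttpURLConnection) new URL(target).openConnection();
                                    conn.setConnectTimeout(5000);
                                    conn.setReadTimeout(10000);
                                    InputStream in = conn.getInputStream();
                                    byte[] buf = new byte[8192];
                                    while (in.read(buf) != -1) { /* drain the response */ }
                                    in.close();
                                    ok.incrementAndGet();
                                } catch (Exception e) {
                                    failed.incrementAndGet();
                                }
                            }
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS);
                long elapsed = System.currentTimeMillis() - start;
                System.out.println(ok + " ok, " + failed + " failed in " + elapsed + " ms");
            }
        }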

    Read the article

  • Should I encrypt data in database?

    - by Tio
    I have a client for whom I'm going to build a web application about patient care: managing patients, consults, history, calendars, basically everything about that. The problem is that this is sensitive data, patient history and such. The client insists on encrypting the data at the database level, but I think this is going to degrade the performance of the web app (though maybe I shouldn't be worried about this). I've read the laws about data protection on health issues (Portugal), but they aren't very specific about this (I just questioned them about it, and I'm waiting for their response). I've read the following link, but my question is different: should I encrypt the data in the database or not? One problem I foresee in encrypting the data is that I'm going to need a key. This could be the user password, but we all know what user passwords are like ("12345", etc.), and if I generate a key I would have to store it somewhere, which means that the programmer, DBA, whoever, could have access to it. Any thoughts on this? Even adding a random salt to the user password isn't going to solve the problem, since I can always access it and therefore decrypt the data.
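
    For the application-level option (as opposed to transparent encryption inside the database engine), a minimal sketch of the usual shape, in Java here purely for illustration: derive a key from a secret held outside the database and encrypt each sensitive field before storing it. The secret handling is deliberately simplistic and is exactly the key-management problem the question worries about.

        import java.security.SecureRandom;
        import javax.crypto.Cipher;
        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.GCMParameterSpec;
        import javax.crypto.spec.PBEKeySpec;
        import javax.crypto.spec.SecretKeySpec;

        // Illustrative only: derive an AES key from a secret kept outside the database
        // (environment, key store, HSM, ...) and encrypt one field with AES-GCM. The random
        // IV must be stored alongside the ciphertext; the secret itself must not be.
        public class FieldCrypto {

            static SecretKeySpec deriveKey(char[] secret, byte[] salt) throws Exception {
                SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
                PBEKeySpec spec = new PBEKeySpec(secret, salt, 100_000, 256);
                return new SecretKeySpec(factory.generateSecret(spec).getEncoded(), "AES");
            }

            static byte[][] encrypt(SecretKeySpec key, byte[] plaintext) throws Exception {
                byte[] iv = new byte[12];
                new SecureRandom().nextBytes(iv);
                Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
                cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
                return new byte[][] { iv, cipher.doFinal(plaintext) };  // store both columns
            }

            static byte[] decrypt(SecretKeySpec key, byte[] iv, byte[] ciphertext) throws Exception {
                Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
                cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
                return cipher.doFinal(ciphertext);
            }
        }

    Note this only moves the problem: whoever holds the derivation secret can still decrypt, which is the trade-off the question describes.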

    Read the article

  • Rendering multiple squares fast?

    - by Sam
    So I'm taking my first steps with OpenGL development on Android, and I'm stuck on some serious performance issues. What I'm trying to do is render a whole grid of single-colored squares onto the screen, and I'm getting framerates of ~7FPS. The squares are 9px in size right now with a one pixel border in between, so I get a few thousand of them. I have a class "Square", and the Renderer iterates over all Squares every frame and calls the draw() method of each (just the iteration is fast enough; with no OpenGL code the whole thing runs smoothly at 60FPS). Right now the draw() method looks like this:

        // Prepare the square coordinate data
        GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
                GLES20.GL_FLOAT, false, vertexStride, vertexBuffer);
        // Set color for drawing the square
        GLES20.glUniform4fv(mColorHandle, 1, color, 0);
        // Draw the square
        GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length,
                GLES20.GL_UNSIGNED_SHORT, drawListBuffer);

    So it's actually only 3 OpenGL calls. Everything else (loading shaders, filling buffers, getting appropriate handles, etc.) is done in the constructor, and things like the Program and the handles are also static attributes. What am I missing here, why is it rendering so slow? I've also tried loading the buffer data into VBOs, but that was actually slower... maybe I did something wrong though. Any help greatly appreciated! :)
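
    Three GL calls per square still means several thousand draw calls and uniform updates every frame, which tends to dominate on mobile GPUs. One common approach (not necessarily the fix here, and the handle and class names below are hypothetical) is to concatenate every square into one vertex buffer with per-vertex colour and issue a single draw call; the caller is assumed to have bound a shader whose attributes match positionHandle and colorHandle:

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;
        import android.opengl.GLES20;

        // Hypothetical sketch: pack all squares into one interleaved buffer (x, y, r, g, b, a)
        // as two triangles each, then render the whole grid with a single glDrawArrays call.
        public class SquareBatch {
            private static final int FLOATS_PER_VERTEX = 6;   // 2 position + 4 colour
            private static final int VERTICES_PER_SQUARE = 6; // two triangles
            private final FloatBuffer buffer;
            private int vertexCount = 0;

            public SquareBatch(int maxSquares) {
                buffer = ByteBuffer
                        .allocateDirect(maxSquares * VERTICES_PER_SQUARE * FLOATS_PER_VERTEX * 4)
                        .order(ByteOrder.nativeOrder())
                        .asFloatBuffer();
            }

            public void addSquare(float x, float y, float size, float[] rgba) {
                float x2 = x + size, y2 = y + size;
                float[][] verts = { {x, y}, {x2, y}, {x, y2}, {x, y2}, {x2, y}, {x2, y2} };
                for (float[] v : verts) {
                    buffer.put(v[0]).put(v[1])
                          .put(rgba[0]).put(rgba[1]).put(rgba[2]).put(rgba[3]);
                }
                vertexCount += VERTICES_PER_SQUARE;
            }

            // positionHandle/colorHandle come from glGetAttribLocation on a shader that uses
            // a per-vertex colour attribute instead of a colour uniform.
            public void draw(int positionHandle, int colorHandle) {
                final int stride = FLOATS_PER_VERTEX * 4;
                buffer.position(0);
                GLES20.glEnableVertexAttribArray(positionHandle);
                GLES20.glVertexAttribPointer(positionHandle, 2, GLES20.GL_FLOAT, false, stride, buffer);
                buffer.position(2);
                GLES20.glEnableVertexAttribArray(colorHandle);
                GLES20.glVertexAttribPointer(colorHandle, 4, GLES20.GL_FLOAT, false, stride, buffer);
                GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);
                vertexCount = 0;
                buffer.clear();               // squares are re-added each frame
            }
        }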

    Read the article

  • For a large website developed in PHP, is it necessary to have a framework?

    - by Martin
    I am wondering whether it is necessary to have a framework, or whether it is a must-have, if I plan to make a large website. Large website could mean a lot of things: in other words, multiple dynamic web pages (40-50 dynamic pages, MySQL content) and a lot of visitors (roughly a million hits per month). The site will be hosted in a dedicated server environment. I know that a framework could simplify coding for a developer team, and that it includes libraries and a lot of advantages. But I just feel that I don't need that. I think that learning how it works, managing it and installing it would take more time, and I could use that time to code. I write PHP the simplest way I can (with performance in mind), I try to reuse my code/functions/classes most of the time, and I make sure that if another developer joins the team, he won't be lost in the code. I am also planning to use Memcached or another cache for PHP. As I said, the site will be hosted in a dedicated server environment but will be entirely managed by the hosting company. I am pretty sure the control panel for me to manage the basic stuff will be cPanel. For a developer like me who only knows PHP, Javascript, HTML, CSS, MySQL and really basic server management, it seems too complicated to have a framework. Am I wrong? Is it worth the time to learn all about it? Thank you for your opinions and suggestions.

    Read the article

  • Oracle Day 2012

    - by Mark Hesse
    As a keynote speaker at this year’s Oracle Day 2012, “Your Vision, Engineered”, I had the honor and pleasure of speaking to a crowd of about 150 attendees about our recently released, fourth generation Exadata X3 In-Memory Machine in a presentation entitled “Oracle Exadata X3 - Transforming Data Management”. The general theme of the thirty-minute talk was how to improve performance, lower costs, and build the foundation for your cloud service platform using Exadata. Since its introduction in 2008, I’ve watched first-hand as Exadata has evolved from a data warehouse-only system to an OLTP and DW in-memory database machine capable of storing hundreds of terabytes of compressed user data in flash and main memory. Many of my Exadata customers are now purchasing additional systems as they continue to standardize Oracle 11g deployments on the best database platform available.

    Read the article

  • How can state changes be batched while adhering to opaque-front-to-back/alpha-blended-back-to-front?

    - by Sion Sheevok
    This is a question I've never been able to find the answer to. Batching objects with similar states is a major performance gain when rendering many objects. However, I've learned various rules for drawing objects in the game world: draw all opaque objects front-to-back, and draw all alpha-blended objects back-to-front. Some of the major parameters to batch by, as I understand it, are textures, vertex buffers, and index buffers. It seems that, as long as you are adhering to those two rules, there's little to be done with regard to batching. I see one possibility to batch while still adhering to the two rules: opaque objects can still be drawn out of depth order, because drawing them front-to-back is merely a fill-rate optimization, and state changes may very well be far more expensive than the overdraw caused by drawing out of depth order. Non-opaque objects, however, those that require alpha blending at least, must be drawn back-to-front in order to avoid rendering artifacts. Is the loss of the fill-rate optimization for opaques worth the state-batching optimization?
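
    A minimal sketch of the compromise described above, in Java with hypothetical types: opaque draws are ordered by a state key (texture, shader, buffers) to maximise batching, and only the alpha-blended draws are sorted strictly back-to-front:

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        // Hypothetical sketch: group opaque draw calls by render state (batching), keep
        // alpha-blended draw calls in strict back-to-front depth order (correctness).
        class DrawCall {
            final int stateKey;      // e.g. texture id / shader id packed into one int
            final float viewDepth;   // distance from the camera
            final boolean blended;

            DrawCall(int stateKey, float viewDepth, boolean blended) {
                this.stateKey = stateKey;
                this.viewDepth = viewDepth;
                this.blended = blended;
            }
        }

        class RenderQueue {
            private final List<DrawCall> opaque = new ArrayList<DrawCall>();
            private final List<DrawCall> blended = new ArrayList<DrawCall>();

            void submit(DrawCall call) {
                (call.blended ? blended : opaque).add(call);
            }

            void flush() {
                // Opaque: order purely for state batching; the depth buffer resolves visibility.
                opaque.sort(Comparator.comparingInt((DrawCall c) -> c.stateKey));
                // Blended: order strictly back-to-front so blending composites correctly.
                blended.sort(Comparator.comparingDouble((DrawCall c) -> c.viewDepth).reversed());

                for (DrawCall c : opaque) issue(c);
                for (DrawCall c : blended) issue(c);
                opaque.clear();
                blended.clear();
            }

            private void issue(DrawCall c) {
                // bind state for c.stateKey only if it differs from the previous call, then draw
            }
        }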

    Read the article

  • Choosing the Database Solution for Large Data Application

    - by GµårÐïåñ
    I have been tasked to write an application in VB.NET that will be a combination of document and inventory management, which will be used to store document images in TIFF, PDF, XPS, TXT, DOC, PPT and so on as binary data that can be retrieved for viewing, printing, and possibly OCR so it is searchable, along with metadata such as sender, recipient, type of document, date, source, etc. So the table would probably be something like: DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be put inside). My concern is finding a database solution that will not become unstable due to size restrictions, record limitations and performance. Some of the options are MS SQL, SQL Express, SQLite, MySQL, and Access. Now I can pretty much eliminate Access right off the bat, as it is just too limiting and not scalable. I can further eliminate SQL Express because of the 2 GB limit, and again scalability. So that leaves me with MS SQL, SQLite and MySQL (although if anyone has other options they think would be good as well, please feel free to share them; by no means am I set on these only). So this brings me to what you think is the best option for what I have described. The goal is that the data is all in one place (a single file), which will make backup and portability easier. For small-volume usage pretty much any solution will hold for a while, but my goal is to think ahead and make sure it's able to withstand heavy, large-volume usage as well. Another consideration is interoperability with .NET and the stability of such code, to avoid errors and memory leaks. Your feedback would be greatly appreciated.

    Read the article

  • How to identify web development benchmarking questions? [closed]

    - by GenericJam
    I am in my final year of college and I have to put forward some sort of thesis for my final year project. The project is a web based attendance system that I am building for the college. I have it about 70% complete in Java. After completing it in Java, the plan is for me to rewrite the server bit in Erlang and then release the bitter rivals in a head to head cage match. The idea being that there is some sort of grounds for comparison. There are a few hurdles along the way, such as me learning Erlang. I understand that a performance comparison like this isn't strictly scientific as there are many factors such as the programmer (myself); the hardware it runs on; etc... but it is meant to be a reasonable comparison of the merits of using Java vs. Erlang for web development. I need help in identifying what the relevant questions are that my project could address. Even though the project scope is fixed, I am trying to shoehorn in some worthwhile scientific inquiries.

    Read the article

  • Microsoft Press Deal of the Day 11/Oct/2012 - CLR via C#, 3rd Edition

    - by TATWORTH
    Today's Deal of the Day from Microsoft Press at http://shop.oreilly.com/product/9780735627048.do?code=MSDEAL is CLR via C#, 3rd Edition. The deal expires, probably at 23:59 PT, today 11/Oct/2012. Remember to use the code MSDEAL at checkout. "Dig deep and master the intricacies of the common language runtime (CLR) and the .NET Framework 4.0. Written by a highly regarded programming expert and consultant to the Microsoft® .NET team, this guide is ideal for developers building any kind of application - including Microsoft® ASP.NET, Windows® Forms, Microsoft® SQL Server®, Web services, and console applications. You'll get hands-on instruction and extensive C# code samples to help you tackle the tough topics and develop high-performance applications." This is a very thorough book about .NET that I have completed reviewing. I commend it to all C# development teams and to individual developers with at least a year's worth of C# experience. The only drawback is that there is no VB.NET equivalent book for the benefit of the many programming shops that have chosen VB.NET. For further details about the book see: http://oreilly.com/catalog/9780735627048 The author has made some useful source code available at http://www.wintellect.com/Resources/Downloads/PushPin

    Read the article

  • How can I improve the "smoothness" of a 2D side-scrolling iPhone game?

    - by MrDatabase
    I'm working on a relatively simple 2D side-scrolling iPhone game. The controls are tilt-based. I use OpenGL ES 1.1 for the graphics. The game state is updated at a rate of 30 Hz, and the drawing is updated at a rate of 30 fps (via NSTimer). The smoothness of the drawing is OK, but not quite as smooth as a game like iFighter. What can I do to improve the smoothness of the game? Here are the potential issues I've briefly considered:

      • I'm varying the opacity of up to 15 "small" (20x20 pixel) textures at a time; apparently varying the opacity in this manner can degrade drawing performance.
      • I'm rendering at only 30 fps (via NSTimer); perhaps 2D games like iFighter are rendered at a higher frame rate?
      • Perhaps the game state could be updated at a faster rate? Note the acceleration values are updated at 100 Hz, so I could potentially update part of the game state at 100 Hz.
      • All of my textures are PNG24; perhaps PNG8 would help (due to smaller size etc.).

    Read the article

  • "Misaligned partition" - Should I do repartition (how?)

    - by RndmUbuntuAmateur
    I tried to install Ubuntu 12.04 from a USB stick alongside the existing 64-bit Win7 OS, and now I'm not sure whether the install was completely successful: the Disk Utility tool claims that the extended partition (which contains the Ubuntu partition and swap) is "misaligned" and recommends repartitioning. What should I do, and if I should repartition, how do I do it (especially if I would like not to lose the data on the Win7 partition)?

    Background info: a fairly new Thinkpad laptop (UEFI BIOS, if that matters). Before the install there were already a "SYSTEM_DRV" partition, the main Windows partition and a Lenovo recovery partition (all NTFS). Now the table looks like this: SYSTEM_DRV (sda1), Windows (sda2), Extended (sda4) (which contains Linux (sda5; ext4) and Swap (sda6)) and Recovery (sda3). The Disk Utility tool gives the following message when I select the extended partition: "The partition is misaligned by 1024 bytes. This may result in very poor performance. Repartitioning is suggested."

    There were a couple of problems during the install, which I describe below in case they happen to be relevant. The installer claimed that it recognized the existing OSes fine, so I checked the corresponding option during the install. Next, when it asked me how to allocate the disk space, the first weird thing happened: the installer gave me a graphical slider to allocate disk space between the pre-existing Win7 OS and the new Ubuntu, but it did not inform me which partition would be for Ubuntu and which for Windows. Well, I decided to go with the setting the installer proposed (not sure if this is relevant, but I guess I'd better mention it anyway; previous partitioning tools have been more self-explanatory). After the install (which reported no errors), GRUB/Ubuntu refused to boot. Luckily this problem was quite straightforwardly resolved with a live Ubuntu USB and Boot-Repair ("Recommended repair" worked just fine). After all this hassle I decided to check the partition table just to be sure, and the Disk Utility gives the warning message I described.

    Read the article

  • MySQL Workbench 5.2.39 GA Released

    - by user13164789
    The MySQL Developer Tools team is announcing the next maintenance release of its flagship product, MySQL Workbench, version 5.2.39. This version contains MySQL Utilities 1.0.5, a set of command-line Python utilities for helping to perform and script various administration tasks for MySQL. A complete list of changes in this release of the Utilities can be found at: http://dev.mysql.com/doc/workbench/en/wb-utils-news-1-0-5.html

    MySQL Workbench 5.2 GA:
      • Data Modeling
      • Query (replaces the old MySQL Query Browser)
      • Administration (replaces the old MySQL Administrator)

    Please get your copy from our Download site. Sources and binary packages are available for several platforms, including Windows, Mac OS X and Linux: http://dev.mysql.com/downloads/workbench/ Workbench documentation can be found here: http://dev.mysql.com/doc/workbench/en/index.html Utilities documentation can be found here: http://dev.mysql.com/doc/workbench/en/mysql-utilities.html

    In addition to the new Query/SQL Development and Administration modules, version 5.2 features improved stability and performance - especially on Windows, where OpenGL support has been enhanced and the UI was optimized to offer better responsiveness. This release also includes improvements to the scripting capabilities of the SQL Editor. You can read more about it at http://wb.mysql.com/workbench/doc/ For a detailed list of resolved issues, see the change log: http://dev.mysql.com/doc/workbench/en/wb-change-history.html If you need any additional info or help please get in touch with us. Post in our forums or leave comments on our blog pages. - The MySQL Workbench Team

    Read the article

  • Join me at OpenWorld 2012

    - by Michael Palmeter (Exalogic PM)
    For those of you who will be coming out to Oracle OpenWorld 2012 next week in San Francisco, I encourage you to take a few minutes on Monday afternoon to come to my session on Oracle Exalogic. Click here for more info: CON9416 - Oracle Exalogic 2.0: Ready-to-Deploy, Mission-Critical Private Cloud. My session is one of the first on Oracle Exalogic (one of the privileges of running Product Management for the product), and with that in mind it is going to be something of an introduction and overview. The material I will present is tailored for C-level customers who are interested in the product but haven't really been exposed to it in any detail. This is essentially the same sort of presentation I give to customers that visit Oracle HQ, and it provides context for all of the other excellent sessions that follow. During this session I will talk about:

      • The macro-trends in the industry that are driving Exalogic strategy - IT-as-a-Service and infrastructure convergence
      • The first two years of market success with Exalogic - who's bought it, why, and what their results have been
      • Exalogic key features and differentiation - why it's the best possible platform for Oracle business applications and middleware
      • How Exalogic performs, and why it is the hands-down performance champion of Enterprise cloud platforms

    If you haven't signed up yet, please do. I'd love to see you there.

    Read the article

  • Usb stick too slow to benchmark?

    - by user85340
    I have a Core 2 Duo [email protected] with 3GB RAM. After some time using Xubuntu 10.10 on an 8GB stick I decided to switch to 12.04 and put it onto a 32GB stick (Transcend). I use EXT4 with no journalling, noatime, etc. set; /tmp and /run use tmpfs. And it is REALLY slow, MUCH slower than the old Xubuntu on the 8GB stick. Starting takes minutes, and all applications "fade" because they respond too slowly. I first thought that the NVidia graphics card was responsible for this, because there seem to be some known problems with that, but making the suggested adjustment (unchecking the sync checkbox) did not help. I believe the root cause is that access to the USB stick is extremely slow: running the read benchmark in the Disk Utility brought up the message "disk is too slow to benchmark"! BUT: when I run the same benchmark from the live CD I get around 20MB/s read performance and have a very responsive system! So how can I find out what is going on here?

    Read the article
