Search Results

Search found 16914 results on 677 pages for 'single threaded'.


  • TCO Comparison: Oracle Exadata vs IBM P-Series

    - by Javier Puerta
    Cost Comparison for Business Decision-makers: Oracle Exadata Database Machine vs. IBM Power Systems. How to Weigh a Purchase Decision, October 2012. Download the full report here. In this research-based white paper, conducted at the request of Oracle, The FactPoint Group compares the cost of ownership of the Oracle Exadata engineered system to a traditional build-your-own (BYO) solution, in this case an IBM Power 770 (P770) with SAN storage. The IBM P770 was chosen because it is IBM's current most popular model, based on FactPoint primary and secondary research and IBM claims, and because at least one of the interviewed customers had specifically migrated from a P770 to Exadata, affording a more specific data point for comparison. This research found that Oracle Exadata:
    - Can be deployed more quickly and easily, requiring 59% fewer man-hours than a traditional IBM Power Systems solution.
    - Delivers dramatically higher performance, typically up to a 12x improvement as described by customers over their prior solution.
    - Requires 40% fewer systems administrator hours to maintain and operate annually, including quicker support calls because of less finger-pointing and faster service with a single vendor.
    - Will become even easier to operate over time as users become more proficient and organize around the benefits of integrated infrastructure.
    - Supplies a highly available, highly scalable and robust solution whose reserve capacity makes Exadata easier for IT to operate, because IT administrators can manage proactively, not reactively.
    Overall, Exadata operations and maintenance keep IT administrators from “living on the edge.” And it's pre-engineered for long-term growth. Finally, compared to IBM Power Systems hardware, Exadata is a bargain from a total cost of ownership perspective: over three years, the IBM hardware running Oracle Database cost 31% more in TCO than Exadata.

    Read the article

  • What is the recommended minimum object size for gzip benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the webserver. Google recommends: "Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger." We serve our content through Akamai, using their network as a proxy and CDN. What they've told me: "Following up on your question regarding the minimum size at which Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes." My reply: what is the reason Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook (see below)? Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are <860 bytes. Akamai's response: "The reason 860 bytes is the minimum size for compression is twofold: (1) The overhead of compressing an object under 860 bytes outweighs the performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them." So I'm here for some fact checking. Is packet size really the end of the reasoning behind the 860-byte limit? Why would high-traffic sites push this lower, closer to the 150-byte limit... just to save on bandwidth costs, or is there a performance gain in doing so?
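    As a quick sanity check of the overhead argument, the small Java sketch below (not from the question; the payload contents are made up) gzips a ~50-byte payload and a ~6 KB payload. The gzip format alone adds an 18-byte header/trailer plus deflate block overhead, so the tiny payload typically comes out larger than it went in, while the larger, repetitive one shrinks dramatically:
      import java.io.ByteArrayOutputStream;
      import java.util.zip.GZIPOutputStream;

      public class GzipOverheadDemo {
          // Returns the size of the gzip-compressed form of the input.
          static int gzippedSize(byte[] input) throws Exception {
              ByteArrayOutputStream bos = new ByteArrayOutputStream();
              try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                  gz.write(input);
              }
              return bos.size();
          }

          public static void main(String[] args) throws Exception {
              // A tiny AJAX-style response, well under the 150-860 byte thresholds.
              byte[] small = "{\"status\":\"ok\",\"items\":[1,2,3],\"msg\":\"hi\"}".getBytes("UTF-8");
              // A larger, repetitive payload that compresses well.
              StringBuilder sb = new StringBuilder();
              for (int i = 0; i < 200; i++) sb.append("{\"status\":\"ok\",\"items\":[1,2,3]},");
              byte[] large = sb.toString().getBytes("UTF-8");

              System.out.println("small: " + small.length + " -> " + gzippedSize(small) + " bytes gzipped");
              System.out.println("large: " + large.length + " -> " + gzippedSize(large) + " bytes gzipped");
          }
      }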

    Read the article

  • Which server is best for me?

    - by mathew
    I am looking for the best hosting service for my website. It is a PHP/MySQL-driven site that scrapes more than 10 websites, parses roughly 8-10 APIs, reads about 150 MB of .dat files from the local hard drive, parses one RSS feed, and shows live graphs from other sites, a geo map from Google, the Google Maps API and so on, plus one widget that shows real-time results for anyone who chooses it. So my question is: which is the best option for me? I am actually considering SoftLayer cloud, as they are pretty cheap and offer more facilities than Rackspace. Another option is a dedicated server with 2 single-core processors, 4 GB RAM and 250 GB SATA II storage, with a 100 Mbps uplink. So please tell me which will be the better option. I heard that a dedicated server has more limitations than cloud, but the cloud uses SAN for storage, so I am afraid database reads may be a bit slower... and their basic plan has only 1 GB of RAM.

    Read the article

  • (Open Source) Cloud-Filesystem to run a Database on Top?

    - by jens
    Hello, what are the current technologies and implementations for building a filesystem with unlimited capacity by combining single servers and their hard disks into a "grid/cloud filesystem"? I need unlimited space (by adding further servers), but it must be a filesystem capable of running a database on top. I know of Apache Hadoop, but that does not seem to be ideal for running a DB on top of it (or am I wrong?). And iSCSI seems to be "remote/networked", but I do not know how, or whether, it can be clustered. Thank you very much! jens

    Read the article

  • Rendering a big game universe - bitmaps or vector graphics?

    - by user1641923
    I am new to Android development, though I have a lot of experience with Java, C++ and PHP programming and a bit of experience with vector graphics too (basic 3ds Max, Flash, etc.). I am starting to work on an Android game: a 2D space shooter/RPG, with no game engine and no third-party libs. I really want to create a very large game universe, or even a pseudo-infinite one (without visible borders, as if it were a 2D projection of a sphere). It should include 10-12 clusters of 7-8 planets/other space objects, a random number of individual asteroids/comets the player can interact with, and a non-interactive background. I am looking for the least complicated approach to create such a universe. My current ideas are: (1) Simply create bitmaps with space-scenery backgrounds that can be seamlessly tiled, construct my 2D universe from these tiles, then place interactive objects (planets, other spaceships) on top. (2) Use vector graphics: a solid-colour background, some random background objects, and gradients here and there. My problems here: lack of knowledge of how well vector graphics is integrated in Android; performance; memory usage. Does Android handle big bitmaps well? Do all of the bitmaps have to stay in memory for the whole game session? I am interested in technical details regarding each of the ideas and a suggestion as to which I should go with.
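    For the bitmap-tiling idea, the wrap-around bookkeeping is mostly modular arithmetic. Below is a minimal, hypothetical Java/Android sketch (not from the question; names such as cameraX/cameraY and the pre-loaded tile Bitmap are assumptions) that draws a seamlessly repeating background so the camera can scroll indefinitely while only one tile bitmap stays in memory:
      import android.graphics.Bitmap;
      import android.graphics.Canvas;

      public class TiledBackground {
          private final Bitmap tile;   // a seamlessly tiling space-scenery bitmap

          public TiledBackground(Bitmap tile) {
              this.tile = tile;
          }

          // Draw enough copies of the tile to cover the screen, offset by the
          // camera position folded into [0, tileSize) so scrolling never "ends".
          public void draw(Canvas canvas, float cameraX, float cameraY) {
              int tw = tile.getWidth();
              int th = tile.getHeight();
              float startX = -(((cameraX % tw) + tw) % tw);
              float startY = -(((cameraY % th) + th) % th);
              for (float y = startY; y < canvas.getHeight(); y += th) {
                  for (float x = startX; x < canvas.getWidth(); x += tw) {
                      canvas.drawBitmap(tile, x, y, null);
                  }
              }
          }
      }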

    Read the article

  • Slick2d Spritesheet showing whole image

    - by BotskoNet
    I'm trying to show a single sub-image from a sprite sheet. Using the Slick2D SpriteSheet class, all it's doing is showing me the entire image, scaled down to fit the cell dimensions. The image is 96x192 and should have cells of 32x32. The code:
      SpriteSheet spriteSheet = new SpriteSheet("images/"+file, 32, 32);
      System.out.println("Horiz Count: " + spriteSheet.getHorizontalCount());
      System.out.println("Vert Count: " + spriteSheet.getVerticalCount());
      System.out.println("Height: " + spriteSheet.getHeight());
      System.out.println("Width: " + spriteSheet.getWidth());
      System.out.println("Texture Width: " + spriteSheet.getTextureWidth());
      System.out.println("Texture Height: " + spriteSheet.getTextureHeight());
    Prints:
      Horiz Count: 3
      Vert Count: 6
      Height: 192
      Width: 96
      Texture Width: 0.75
      Texture Height: 0.75
    Not sure what the texture dimensions refer to, but the rest is entirely accurate. However, when I draw the icon, the entire sprite image shows, scaled down to 32x32:
      Image image = spriteSheet.getSprite(1, 0); // a test
      image.bind();
      GL11.glEnable(GL11.GL_BLEND);
      GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
      GL11.glBegin(GL11.GL_QUADS);
      GL11.glTexCoord2f(0,0); GL11.glVertex2f(x,y);
      GL11.glTexCoord2f(1,0); GL11.glVertex2f(x+image.getWidth(),y);
      GL11.glTexCoord2f(1,1); GL11.glVertex2f(x+image.getWidth(),y+image.getHeight());
      GL11.glTexCoord2f(0,1); GL11.glVertex2f(x,y+image.getHeight());
      GL11.glEnd();
      GL11.glDisable(GL11.GL_BLEND);
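    A hedged guess at the cause (not an authoritative answer): the quad above always samples texture coordinates (0,0) to (1,1), i.e. the whole sheet texture, no matter which sprite was requested. One possible fix, sketched below reusing the question's x/y variables and GL setup, is to use the sub-image's own texture window, which org.newdawn.slick.Image exposes:
      Image image = spriteSheet.getSprite(1, 0);
      image.bind();
      // Texture-space window of this cell within the sheet texture (all in 0..1).
      float tx = image.getTextureOffsetX();
      float ty = image.getTextureOffsetY();
      float tw = image.getTextureWidth();
      float th = image.getTextureHeight();
      GL11.glBegin(GL11.GL_QUADS);
      GL11.glTexCoord2f(tx, ty);           GL11.glVertex2f(x, y);
      GL11.glTexCoord2f(tx + tw, ty);      GL11.glVertex2f(x + image.getWidth(), y);
      GL11.glTexCoord2f(tx + tw, ty + th); GL11.glVertex2f(x + image.getWidth(), y + image.getHeight());
      GL11.glTexCoord2f(tx, ty + th);      GL11.glVertex2f(x, y + image.getHeight());
      GL11.glEnd();
      // Simpler still: image.draw(x, y) lets Slick2D handle the texture coordinates itself.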

    Read the article

  • Give root password for maintenance

    - by Jevgeni Smirnov
    After entering shutdown now in a terminal, everything runs normally and then:
      All processes ended within 2 seconds...done
      INIT: Going single user
      INIT: Sending processes the TERM signal
      INIT: Sending processes the KILL signal
      Give root password for maintenance (or....
    I press Ctrl+D, and it shows me the Debian login screen. Shutdown through the GUI works properly. UPDATE 1: It seems some process hangs. Moreover, I've managed to power off the server after several retries. Recently I've installed only ntp and ntpdate, nothing more. I suppose one of them might be conflicting with iptables.

    Read the article

  • Can I force the server to always use Turbo Boost?

    - by javapowered
    I'm using an HP DL360p Gen8 with 2 Xeon E5-2640 CPUs. I do not load the CPU to 100%; I load it only ~10%, so I guess Turbo Boost is not activated. However, I'm using my server for trading, so I absolutely don't care about CPU load, but I always want to process my data ASAP. So I want the server to operate at the maximum 3 GHz. I.e. 90% of the time I don't have anything to process; 10% of the time I have data to process, but I need to process it ASAP. I need every single microsecond. So I want the server to always operate in its maximum "turbo-boosted" mode. Is that possible?
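    Frequency behaviour is mostly a BIOS/OS power-management question (maximum-performance power profile, C-states, CPU governor), but one application-level trick often used in latency-sensitive trading systems is worth sketching: keep the hot thread busy-spinning while it waits, so its core never drops into an idle state and gets clocked down. A hypothetical Java sketch (the queue and handler names are made up, not from the question), at the cost of burning one core at 100%:
      import java.util.concurrent.ConcurrentLinkedQueue;

      public class BusySpinWorker implements Runnable {
          private final ConcurrentLinkedQueue<String> inbox = new ConcurrentLinkedQueue<>();
          private volatile boolean running = true;

          public void submit(String marketData) {
              inbox.offer(marketData);
          }

          @Override
          public void run() {
              // Busy-spin instead of blocking: the core stays active while idle,
              // so there is no wake-up latency when data finally arrives.
              while (running) {
                  String msg = inbox.poll();
                  if (msg != null) {
                      handle(msg);
                  }
                  // Optionally call Thread.onSpinWait() (Java 9+) to hint the CPU.
              }
          }

          private void handle(String msg) {
              // process the tick as fast as possible
          }

          public void stop() { running = false; }
      }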

    Read the article

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies.
    1. Cross-schema dependencies
    Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1:
      SOURCE:
      CREATE TABLE SchemaA.Table1 (Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
      CREATE TABLE SchemaB.Table1 (Col1 NUMBER PRIMARY KEY);
      TARGET:
      CREATE TABLE SchemaA.Table1 (Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1));
      CREATE TABLE SchemaB.Table1 (Col1 VARCHAR2(100) PRIMARY KEY);
    We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of:
    1. Creating a table with the new schema
    2. Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER)
    3. Dropping the old table
    4. Renaming the new table to the same name as the old table
    Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population prefiltering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is:
    1. Drop the foreign key constraint on SchemaA.Table1
    2. Rebuild SchemaB.Table1
    3. Rebuild SchemaA.Table1, adding the foreign key constraint to the new table
    This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA. Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we're populating, or point to objects in the schemas we're populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies, e.g. a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn't enough.
    2. Dependency chains
    The solution above will only get the immediate dependencies of objects in populated schemas. What if there's a chain of dependencies?
      A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1
    If we're only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents. Fortunately, we don't have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL – CONNECT BY. So, we can put all the dependencies we want to include together in a big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we're populating. We should end up with all the objects that might be affected by modifications in the initial schema we're populating. Good solution? Well, no. For one thing, it's sloooooow. all_dependencies, on my test databases, has over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance.
    3. Comparison dependencies
    Consider the following schema:
      SOURCE:
      CREATE TABLE SchemaA.Table1 (Col1 NUMBER REFERENCES SchemaB.Table1(col1));
      CREATE TABLE SchemaB.Table1 (Col1 NUMBER PRIMARY KEY);
      TARGET:
      CREATE TABLE SchemaA.Table1 (Col1 VARCHAR2(100));
      CREATE TABLE SchemaB.Table1 (Col1 VARCHAR2(100));
    What will happen if we use the dependency algorithm above on the source and target databases? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference, therefore SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two object models, what you will end up with is:
      SchemaA.Table1 (source) -> SchemaA.Table1 (target)
      SchemaB.Table1 (source) -> (no object exists)
    When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions:
    1. Create SchemaB.Table1
    2. Rebuild SchemaA.Table1, with foreign key to SchemaB.Table1
    Oops. Because the dependencies are only followed within a single database, we've tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won't try and create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1):
      SOURCE:
      CREATE TABLE SchemaA.Table1 (Col1 NUMBER REFERENCES SchemaB.Table1(col1));
      CREATE TABLE SchemaB.Table1 (Col1 NUMBER PRIMARY KEY);
      CREATE TABLE SchemaC.Table1 (Col1 NUMBER);
      TARGET:
      CREATE TABLE SchemaA.Table1 (Col1 VARCHAR2(100));
      CREATE TABLE SchemaB.Table1 (Col1 VARCHAR2(100) PRIMARY KEY);
      CREATE TABLE SchemaC.Table1 (Col1 VARCHAR2(100) REFERENCES SchemaB.Table1);
    Although we're now including SchemaB.Table1 on both sides of the comparison, there's a third table (SchemaC.Table1) that we don't know about that will cause the rebuild of SchemaB.Table1 to fail if we try and synchronize SchemaA.Table1. That's because we're only running the dependency query on the schemas we're explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database. Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison:
    1. Find initial dependencies of the schemas the user has selected to compare, on the source and target
    2. Include these objects in both the source and target object populations
    3. Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa
    4. Repeat 2 & 3 until no more objects are found
    For the schema above, this will result in the following sequence of actions:
    1. Find initial dependencies: SchemaA.Table1 -> SchemaB.Table1 found on source; no objects found on target
    2. Include objects in both source and target: SchemaB.Table1 included in source and target
    3. Run dependency query, starting with found objects: no objects to start with on source; SchemaB.Table1 -> SchemaC.Table1 found on target
    4. Include objects in both source and target: SchemaC.Table1 included in source and target
    5. Run dependency query on found objects: no objects found on source; no objects to start with on target
    6. Stop
    This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don't read the entire dependency graph onto the client we also pull the graph across in bits – we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don't yet know about, a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed.
    4. Object blacklists and fast dependencies
    When we tested this solution, we were puzzled that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc.) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (e.g. the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in that schema. To solve this we introduced blacklists of objects we wouldn't follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY, ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow-up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory. Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that some clauses, needed for dependency information we required, were querying system tables with no indexes on them! To cut a long story short, running the following query:
      SELECT * FROM all_tab_cols WHERE data_type_owner = 'XDB';
    results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the 'Ignore slow dependencies' option was born – not querying this and a couple of similar clauses to drastically speed up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!
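    As an aside for readers who want to picture the back-and-forth traversal from section 3, here is a much-simplified, hypothetical Java sketch of that loop (illustrative only, not the product's actual implementation; DependencySource stands in for the per-database dependency query):
      import java.util.HashSet;
      import java.util.Set;

      interface DependencySource {
          // Objects reachable (in either direction) from the given starting objects.
          Set<String> dependenciesOf(Set<String> startingObjects);
      }

      class DependencyExpander {
          // Steps 1-4 of the algorithm: keep feeding objects found on one database
          // into the dependency query on the other until nothing new appears.
          static Set<String> expand(Set<String> selectedObjects,
                                    DependencySource source,
                                    DependencySource target) {
              Set<String> all = new HashSet<>(selectedObjects);
              Set<String> foundOnSource = source.dependenciesOf(selectedObjects);
              Set<String> foundOnTarget = target.dependenciesOf(selectedObjects);
              while (true) {
                  Set<String> fresh = new HashSet<>(foundOnSource);
                  fresh.addAll(foundOnTarget);
                  fresh.removeAll(all);            // only objects we haven't seen yet
                  if (fresh.isEmpty()) break;      // step 4: stop when nothing new is found
                  all.addAll(fresh);               // step 2: include on both sides
                  // Step 3: objects found on one side seed the query on the other side.
                  Set<String> nextSource = source.dependenciesOf(foundOnTarget);
                  Set<String> nextTarget = target.dependenciesOf(foundOnSource);
                  foundOnSource = nextSource;
                  foundOnTarget = nextTarget;
              }
              return all;
          }
      }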

    Read the article

  • Importing Outlook 2007 rules error

    - by Alex
    I'm trying to move an Outlook 2007 account (POP3, no Exchange) to a new machine and I'm having trouble importing the rules from the old machine to the new one. Here is the deal: I imported the .pst file on the new machine, but when I try to import the rules, every single one of them breaks. The folder and sub-folder hierarchy is preserved on import of the .pst, but the rules don't point to the right folder; instead each points to "the specified folder". Same OS (Windows XP), same mail client (Outlook 2007), and the .pst file is about 8 GB. Any help is greatly appreciated.

    Read the article

  • Encoding video stream for playback on a vanilla Windows XP with mencoder

    - by Tamás
    I have a bunch of PNG files, generated from a script. They represent consecutive frames of a video sequence and I'd like to encode them into a single AVI file (or some other video format) using mencoder. What parameters should I use to ensure that the video can be viewed on a vanilla Windows XP using Windows Media Player with no extra codecs installed apart from the default ones? So far I've tried -ovc lavc -lavcopts vcodec=wmv2 and -ovc lavc -lavcopts vcodec=msmpeg4 with no success. (Background story: some of the people I'm collaborating with on a scientific project cannot install any codecs on their university computers without the help of the local sysadmins, who are of course not very willing to install anything. I'd like to ensure that they can also view the video files I am creating).

    Read the article

  • Does shutting down idle VMs improve performance?

    - by Samselvaprabu
    Our team members often come to me with a complaint that their VMs are slow. Some suggested temporarily shutting down other VMs and then trying the slow VM again, but in most cases that does not help. Assume I have assigned 4 GB of RAM and 2 vCPUs to my VM; ideally it should not face performance issues. Our ESXi 4.1 server hosts multiple VMs (we have overcommitted memory and CPU). Does shutting down other VMs really help improve performance or not? [Note: We are using ESXi 4.1 and our hardware is an R710 server. We have a large number of VMs on a single server, so we have overcommitted memory.]

    Read the article

  • How can I enable anonymous access to a Samba share under ADS security mode?

    - by hemp
    I'm trying to enable anonymous access to a single service in my Samba config. Authorized user access is working perfectly, but when I attempt a no-password connection, I get this message:
      Anonymous login successful
      Domain=[...] OS=[Unix] Server=[Samba 3.3.8-0.51.el5]
      tree connect failed: NT_STATUS_LOGON_FAILURE
    The message log shows this error:
      ... smbd[21262]: [2010/05/24 21:26:39, 0] smbd/service.c:make_connection_snum(1004)
      ... smbd[21262]: Can't become connected user!
    The smb.conf is configured thusly:
      [global]
      security = ads
      obey pam restrictions = Yes
      winbind enum users = Yes
      winbind enum groups = Yes
      winbind use default domain = true
      valid users = "@domain admins", "@domain users"
      guest account = nobody
      map to guest = Bad User
      [evilshare]
      path = /evil/share
      guest ok = yes
      read only = No
      browseable = No
    Given that I have 'map to guest = Bad User' and 'guest ok' specified, I don't understand why it is trying to "become connected user". Should it not be trying to "become guest user"?

    Read the article

  • Custom inventory items based on inheritance

    - by Bogdan Marginean
    So, here's the scenario: I'm building an RPG. Like most other RPGs on the market, my game will feature an inventory and, of course, inventory items. So far I've done fine using a single class for all items, because I did not need anything other than character stat alteration on item usage (consumption). However, I'd like some items to have a more exotic effect. Think of something like the user consuming a transformation potion and automatically turning into a beast. In order to achieve this I've thought about declaring a new class that inherits from BaseItem for each such item. Each descendant would override some methods (like void OnConsume()) to change the base behavior. This works fine, but when it comes to inventory management, I have some issues. The actual inventory will have to work with BaseItem components only (for obvious reasons, as it's an enumerable collection of objects of the same type); casting any descendant to the base class is possible, so there are no problems in adding items to the inventory. But how can I keep track of the descendant's type (class) for each item in the inventory? And how do I invoke the descendant's OnConsume from within the inventory, for each item? Let me know if you can think of a better solution than mine, or if you can think of a solution to my problem only. Development is done in C#, inside Unity 3.5. Thanks!
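    One common answer here is that no type tracking or casting is needed at all: virtual dispatch picks the right override when the inventory calls OnConsume on a BaseItem reference. A minimal sketch of the pattern (shown in Java; the same idea maps directly onto C#'s virtual/override methods, and all class names are hypothetical):
      import java.util.ArrayList;
      import java.util.List;

      abstract class BaseItem {
          final String name;
          BaseItem(String name) { this.name = name; }
          // Default behaviour: ordinary stat changes.
          void onConsume(Player target) { /* adjust stats here */ }
      }

      class TransformationPotion extends BaseItem {
          TransformationPotion() { super("Transformation Potion"); }
          @Override
          void onConsume(Player target) {
              target.transformIntoBeast();   // exotic, item-specific effect
          }
      }

      class Player {
          void transformIntoBeast() { System.out.println("You turn into a beast!"); }
      }

      class Inventory {
          private final List<BaseItem> items = new ArrayList<>();

          void add(BaseItem item) { items.add(item); }

          // The inventory only ever sees BaseItem; dynamic dispatch runs the
          // descendant's onConsume at runtime, so no casting is required.
          void consume(int slot, Player target) {
              items.get(slot).onConsume(target);
          }
      }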

    Read the article

  • How do you enable multi-core virtualization in Windows 8 Pro?

    - by Greg B
    I've just got a new Dell Vostro 470 with a quad-core (8-thread) i7 3770 and I'm trying to run virtual machines on it, which works fine, except that I can't assign multiple cores to a VM. I've checked the BIOS, which states Intel Virtualization Technology [Enabled], but both Hyper-V and VirtualBox will only allow me to assign a single core. If I run the Intel Processor Identification Utility on the host OS it tells me that Intel Virtualization Technology isn't supported by the processor, but according to the Intel website, it is. So what's going on? Have Dell clipped the i7's wings? Is there some config in Windows I need to change?

    Read the article

  • Oracle releases ADF Mobile with Java ME CDC for iOS and Android

    - by hinkmond
    Finally. Oracle has released a new product that I've worked on for a while now. Oracle ADF Mobile is available for iOS and Android, bringing Java ME CDC technology to iPhones and Android devices all over the world. Woot! Java. On iPhone and Android. Yeah, it's like that. See: Java and HTML5 on SmartPhones. Here's a quote: "Oracle announced the availability of Oracle ADF Mobile – a framework that enables the development of hybrid applications for mobile devices. Oracle ADF Mobile uses Java and HTML5 and enables developers to develop a single application that installs and runs on both iOS and Android systems. Java - Application logic is developed with the Java language. Oracle brings a lightweight Java VM embedded with each application so you can develop all your business logic in the platform-neutral language you know and love! (Yes, even iOS!)" Gosh, you'd think it was a big deal. Well, it was! So, go download yours today! Hinkmond

    Read the article

  • Dev Lead Job opening on my team

    My product unit (Parallel Developer Tools) is hiring a developer lead here in Redmond. This position is specifically on the debugger feature team that I "Program Manage". So, if you have what it takes and don't mind working with me every single day, click on the link below to read more and apply. You can also send me your resume and I'll make sure it gets to the right place and that you get a prompt response. There is a very long job description on the Microsoft careers site under job id 707388. Here is an excerpt from the middle (emphasis mine): "...We are in search of a talented and innovative senior lead software design engineer to own development of the debugging tools for data parallelism (including GP-GPU) and HPC Clusters being built by our team. To be successful, you need to be able to guide careers, design and architect well, communicate and share the best development practices, collaborate with your peers, contribute to the vision, and code significant portions of the solution. We want to hear from you if you're passionate about making your mark in the parallel development space, improving people, and building world-class tools." Responsibilities include: managing a team of senior and junior developers; designing and coding high-quality software... For the full background story, requirements, qualifications and responsibilities please visit the official page. Comments about this post welcome at the original blog.

    Read the article

  • Can't get any SMART or temperature data from HDDs

    - by Regs
    I have a PC with a recently installed Gigabyte GA-X79-UD5 motherboard. I've encountered a weird problem getting SMART data or temperatures for the HDDs. Every single tool I've tried in Windows 7 just can't get any data (HDTune, AIDA64...). I suspected that the SMART feature was disabled in the BIOS, but it seems there is no such option in the BIOS settings. I've even tried updating the BIOS, but still no luck. The same issue occurs with both controllers on that motherboard (Intel and Marvell), and it seems unlikely that both controllers would end up with exactly the same problem. Both controllers are working in AHCI mode. Is there anything that can interfere with getting SMART and temperature data from HDDs? Or is there any way to check whether it's actually a motherboard issue? Is it even possible that this is a hardware issue, given that all HDDs seem to work normally apart from the fact that I can't get any temperature or SMART data from them?

    Read the article

  • Multiple file systems for MySQL

    - by RainDoctor
    Does MySQL support multiple file systems for a single database, with most of the tables being MyISAM? Context: we have a 1.5 TB MySQL database, which is growing at a rate of 200 GB per month. The storage is direct-attached, and its slots are almost full. I can add another DAS and increase the file system, but resizing the volume, resizing the file system, etc. is getting messy. Is there a concept of "tablespace, datafile" (like in Oracle) in the MySQL world? Or how do you manage MySQL databases with these kinds of constraints?

    Read the article

  • Extending AutoVue Through the API

    - by GrahamOracle
    The AutoVue API (previously called the “VueBean” API) is a great way to extend AutoVue Client/Server Deployment – specifically the client component – beyond the out-of-the-box capabilities and into new use-cases. In addition to having a solid grasp of J2SE programming, make sure to leverage the following resources if you're developing (or interested in developing) customizations/extensions to AutoVue Client/Server Deployment:
    - Programmer's Guide: Before all else, read through the AutoVue API Programmer's Guide to get an understanding of the architecture of the API. The Programmer's Guide is included with the installation of AutoVue, and is posted on the Oracle Technology Network (OTN) website for the recent versions of AutoVue: http://www.oracle.com/technetwork/documentation/autovue-091442.html
    - Javadocs: The AutoVue API Javadocs document the many packages, classes, and methods available to you. The Javadocs are included in the product installation under the \docs\JavaDocs\VueBean folder (the easiest starting point is the file index.html).
    - Integrations Forum: If you have development questions that aren't answered in the documentation, feel free to register and post in the public AutoVue Integrations Forum. For more information refer to the following blog post from October 2010: https://blogs.oracle.com/enterprisevisualization/entry/exciting_news_autovue_integrat
    - Code Samples: Although the Oracle Support team's scope of support for API/customization topics is to answer questions about information already provided in the documentation (i.e. not to design or develop custom solutions), there are cases where Support comes across interesting samples or code snippets that may benefit various customers. In those cases, our Support team posts the samples to the Oracle knowledge base and tracks them through a single reference note. The link to the KM Note depends on how you currently access the My Oracle Support portal: Flash interface: https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=REFERENCE&id=1325990.1 (New) HTML interface: https://supporthtml.oracle.com/epmos/faces/ui/km/SearchDocDisplay.jspx?type=DOCUMENT&id=1325990.1
    Happy coding!

    Read the article

  • How can I assign an IP address to my virtual Windows Server, so that I can start using it almost as a VPS?

    - by Nelson Symonds
    We are a small office set up with two PCs, one of which runs 24 hours a day. It's almost equivalent to a small server, but right now we're in need of a real server, which is why I am planning to combine my machine and a server into a single PC. I've used VMware Workstation to create a powerful Windows Server 2008 VM within my PC, and I want to attach it to my network switch through the same PC where I am hosting it. I want to use it almost like a physical server, with an IP address and everything, so that I can connect from one PC to the server directly, or my applications can connect to the server directly via its IP address. How should I do this? Step-by-step instructions would be appreciated. Thanks in advance, best regards, Nelson

    Read the article

  • How do I remove a LOT of indexed pages from Google?

    - by Thierry
    A few weeks ago we discovered that Google has indexed some information we would rather keep confidential, in the form of individual PDF files. Our assumption was that this was a problem with our robots.txt that we had overlooked. Even though we are not sure whether or not this is the case, we are certain that the robots.txt file is in a valid format and is, according to Google's Webmaster Tools, blocking the files. However, even after this adjustment was made weeks ago, Google still has the PDF files indexed, but does tell us that further information cannot be provided due to the robots.txt file being present. As you can hopefully understand, this is unwanted behaviour due to the nature of the documents. I am aware that there is a request page provided by Google for this purpose, but there are a lot of files. Is there an easier way to get Google to remove all of the files from its search engine? If not, is there anything else you could advise us to do besides manually requesting Google to remove every single page? Thanks in advance.

    Read the article

  • Ubuntu 11.04 and OpenLDAP - where is the config?

    - by Tom SKelley
    I've been asked to set up a multi-master LDAP environment on Ubuntu 11.04, instead of a single master server. I cloned the master server and recreated it as two VMs. I am trying to follow the instructions in the OpenLDAP documentation here: http://www.openldap.org/doc/admin24/replication.html, which talks about modifying the cn=config tree within LDAP. The subdirectory tree appears to be there at /etc/ldap/slapd.d/, and a slapcat -b cn=config drops out a load of config information. But when I try to connect using a browser and the admin bind credentials:
      ldapsearch -D '<adminDN>' -w <password> -b 'cn=config'
    I get:
      # extended LDIF
      #
      # LDAPv3
      # base <> (default) with scope subtree
      # filter: (objectclass=*)
      # requesting: ALL
      #
      # search result
      search: 2
      result: 32 No such object
    I don't see the config context when I connect via an LDAP browser either. I'm sure I'm missing something, but I can't see what it is!

    Read the article

  • Non-OEM Biometric Software?

    - by Iszi
    Most of us with fingerprint readers and such devices probably use the software provided by the vendor, to enable biometric OS login or single sign-on functionality. However, I've recently wondered if there is any third-party software that will do the same thing. This would be similar to how you don't need the manufacturer's software to use a scanner, printer, or webcam - you just use their drivers and your choice of software. Is there anything like this for fingerprint readers or other biometric devices? Free or Open Source projects are preferred, but I'd be interested in learning about any existing solutions regardless. I personally am particularly interested in Windows-compatible software, but I'll leave the query open for any OS.

    Read the article
