Search Results

Search found 14156 results on 567 pages for 'index maintenance'.


  • links for 2011-02-18

    - by Bob Rhubart
    - VirtualBox: Pre-Built Developer VMs: "Learning your way around a new software stack is challenging enough without having to spend multiple cycles on the install process. Instead, we have packaged such stacks into pre-built Oracle VM VirtualBox appliances that you can download, install, and experience as a single unit." (tags: oracle virtualization virtualbox)
    - Java Space on Parleys (The Java Source): "Oracle partnered with Stephan Janssen, founder of Parleys, to make this happen. The Parleys website offers a user-friendly experience for viewing online content. You can download some of the talks to your desktop or watch them on the go on mobile devices." (tags: oracle java parleys)
    - Why ADF Developers Should Attend ODTUG This Year (Shay Shmeltzer's Weblog): Shay says: "A new track called the "Fusion Middleware" track has been formed and it has lots of sessions for any level of ADF developer. The track is run by several Oracle ACEs who are also involved in the ADF Enterprise Methodology Group." (tags: oracle otn odtug fusionmiddleware)
    - Wrapping up an Exciting Mobile World Congress (The Java Source): "One of the more popular topics in our booth was the use of Java in the Smart Grid. In our booth we were showing off some of the work of the Hydra Consortium, whose goal is to leverage the emerging smart grid infrastructure to securely enable the delivery of personal health data..." (tags: oracle java smartgrid)
    - How to Audit and Monitor BI Publisher Reports Access? (Oracle BI Publisher Blog): "Do you know who is accessing which report at what time in your reporting environment? As you deliver the BI Publisher reports to the production environment and your users start using them as part of their daily business operations, you might wonder about such questions." (tags: oracle otn businessintelligence)
    - Oracle VM VirtualBox 4.0.4 Released! (Oracle's Virtualization Blog): Fat Bloke says: "Oracle made a maintenance update release of Oracle VM VirtualBox version 4.0.4 today. You can download it now, or read about the changes in the ChangeLog." (tags: oracle otn virtualization virtualbox)
    - Obama says Cloud and Data Center Consolidation Will Help Curb IT Costs (WHIR Web Hosting Industry News): "In the report, he estimated that the federal government could reallocate some $20 billion of IT spending to cloud computing technologies and reduce 'data center infrastructure expenditure by approximately 30 percent' through cloud computing." (tags: cloud obama datacenter)
    - Chris Muir: ADF BC: Creating an "EXISTS" View Criteria: Oracle ACE Director Chris Muir shares some ADF tips. (tags: oracle otn oracleace adf)
    - Translation and Multiple Languages with Oracle UCM (Bex Huff): Bex says: "Last year, I gave a presentation at Oracle Open World about Creating and Maintaining an Internationalized Web Site. Well, I'm happy to announce that one of the several add-ons to UCM is now available for purchase!" (tags: oracle otn enterprise2.0 ecm oracleace)
    - ORACLENERD: Design Documentation: Oracle ACE Chet "ORACLENERD" Justice makes a pledge. (tags: oracle otn oracleace database)

    Read the article

  • Traffic fall after a server problem

    - by Sébastien
    I have a website whose traffic I analyse with Google Analytics. Day after day the traffic (mainly from Google SE) increased, until I had a problem with my server. For one day the server was offline, and since then I no longer have as many users as I had before. Now it's as if the site is no longer referenced in Google's index (but when I type "site:mysite.com", I still get all the results). Do you know if this is normal behaviour, and whether the traffic will come back to what it was (the server had problems two days ago)?

    Read the article

  • BPEL 11.1.1.6 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations

    - by Steven Chan (Oracle Development)
    Service Oriented Architecture (SOA) integrations with Oracle E-Business Suite can either be custom integrations that you build yourself or prebuilt integrations from Oracle. For more information about the differences between the two options for SOA integrations, see this previously-published certification announcement. There are five prebuilt BPEL business processes from Oracle E-Business Suite Release 12 product teams:
    - Oracle Price Protection (DPP)
    - Complex Maintenance, Repair & Overhaul (CMRO/AHL)
    - Oracle Transportation Management (WMS, WSH, PO)
    - Advanced Supply Chain Planning (MSC)
    - Product Information Management (PIM/EGO)
    Last year we announced the certification of BPEL 11.1.1.5 for Prebuilt E-Business Suite 12.1.3 SOA integrations. The five prebuilt BPEL processes have now been certified with Oracle BPEL Process Manager 11g version 11.1.1.6 (in Oracle Fusion Middleware SOA Suite 11g). These prebuilt BPEL processes are certified with Oracle E-Business Suite Release 12.1.3 and higher.
    Note: The Supply Chain Trading Connector (CLN) product team has opted not to support BPEL 11g with their prebuilt business processes previously certified with BPEL 10.1.3.5. If you have a requirement for that certification, I would recommend contacting your Oracle account manager to ensure that the Supply Chain team is notified appropriately.
    For additional information about prebuilt integrations with Oracle E-Business Suite Release 12.1.3, please refer to the following documentation:
    - Integrating Oracle E-Business Suite Release 12 with Oracle BPEL available in Oracle SOA Suite 11g (Note 1321776.1)
    - Oracle Fusion Middleware 11g (11.1.1.6.0) Documentation Library
    - Installing Oracle SOA Suite and Oracle Business Process Management Suite
    - Release Notes for Oracle Fusion Middleware 11g (11.1.1.6)
    Certified Platforms:
    - Linux x86 (Oracle Linux 4, 5)
    - Linux x86 (RHEL 5)
    - Linux x86 (SLES 10)
    - Linux x86-64 (Oracle Linux 4, 5, 6)
    - Linux x86-64 (RHEL 5)
    - Linux x86-64 (SLES 10)
    - Oracle Solaris on SPARC (64-bit) (9, 10, 11)
    - HP-UX Itanium (11.23, 11.31)
    - HP-UX PA-RISC (64-bit) (11.23, 11.31)
    - IBM AIX on Power Systems (64-bit) (5.3, 6.1, 7)
    - IBM: Linux on System z (RHEL 5, SLES 10)
    - Microsoft Windows Server (32-bit) (2003, 2008)
    - Microsoft Windows x64 (64-bit) (2008 R2)
    Getting Support: If you need support for the prebuilt EBS 12.1.3 BPEL business processes, you can log Service Requests against the Applications Technology Group product family.
    Related Articles:
    - BPEL 11.1.1.5 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations
    - Webcast Replay Available: SOA Integration Options for E-Business Suite
    - Securing E-Business Suite Web Services with Integrated SOA Gateway

    Read the article

  • Comparison of Extreme Programming (XP) to Traditional Programming Methodologies

    The comparison of extreme programming (XP) to traditional programming methodologies finds similarities with the historic biblical battle between David and Goliath. Goliath of Gath was a Philistine warrior renowned for his size, strength and battle-tested skills. Much like Goliath, traditional methodologies are known to be cumbersome due to large amounts of documentation, and time-consuming due to the time needed to gather all the information. However, traditional methodologies have been widely accepted by the software development community for years because of their attention to detail regarding project development and maintenance. David was a male Israelite teenager who was small, fearless, and untrained in any type of formal combat. In a similar fashion, extreme programming favors code over documentation, so that time is spent on developing the project and not on cumbersome documentation of it. Typically, project managers and developers are fearless when they start this type of project because they usually start with little to no documentation, and they expect to be given changes to implement at the start of every new project iteration. Because of the lack of need or desire for documentation in extreme programming projects, they can appear to have no formal process at all. This is a misconception: because of the consistent development iterations and interaction with clients and users, the project quickly takes form, since each iteration allows it to be refined as the customer's needs and desires change. Ravikant Agarwal and David Umphress documented a new approach to extreme programming, called personal extreme programming (PXP), at the ACM Southeast Regional Conference in 2008. PXP is the application of extreme programming core concepts in a single-developer team environment. PXP focuses on how the main concepts and practices of extreme programming, which are typically centered on a group environment, can be altered to benefit a single-developer environment. Suzanne Smith and Sara Stoecklin are both advocates of extreme programming, according to the Journal of Computing Sciences in Colleges, and in fact they feel that it should receive more attention in introductory programming classes to allow students to better understand the software development process. Reasons why extreme programming is a good thing:
    - Developers get to do more of what they love: develop. Traditional software development methodologies tend to add additional demands on a project by requiring all requirements and project specifications to be fully defined prior to the start of the implementation phase.
    - A standard 40-hour work week. Limiting the work week to 40 hours prevents developers from getting burned out on projects.

    Read the article

  • Which tools you use to make gtk themes?

    - by tutuca
    I'm trying to make a new GTK theme using the Murrine engine, using Humanity (the default in Ubuntu 9.10) as a template. You can grab the code at http://github.com/tutuca/themes. However, I found the process of creating a new theme with it cumbersome. There is no central starting point. The documentation of both the engine options (gtkrc's and stuff) and general theming practices (the format of the index.theme files, folders, bla bla) is scarce; how-tos and tutorials are often old or subject to lots of opinionated debate, and the results are confusing (to me, having a web developer background, at least :-). So... I wanted to ask the fellow GTK themers and artists out there: which tools do you use to create a new theme, and what does your average workflow look like?

    Read the article

  • install Cirrus Logic cs46xx (audio card) drivers

    - by Aikanáro
    I have two sound cards: one is on-board (it's VIA), the other is a Cirrus Logic CS46xx. This is what lspci shows me:

        04:04.0 Multimedia audio controller: Cirrus Logic CS 4614/22/24/30 [CrystalClear SoundFusion Audio Accelerator] (rev 01)

    It only shows the Cirrus Logic, because I disabled the VIA card through the BIOS. This page: http://es.driverscollection.com/?file_id=13152 gives me instructions to install it, but I can't follow them because the folders indicated on the page do not match the ones I see on my system. The ALSA page, http://alsa-project.org/main/index.php/Matrix:Module-cs46xx, also gives me instructions, but I don't understand them. For example, they say to type ./configure in a terminal, but don't say in which directory. I don't think those are instructions for beginners... Right now I can't hear anything. I decided to disable the VIA audio card because I've read they don't get along with Linux, although I do use the integrated VIA video card. I have Ubuntu 11.10.
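    For context, the ALSA wiki's ./configure step assumes you have downloaded and unpacked the alsa-driver source tarball first, and is run from inside the unpacked directory. A minimal sketch of that conventional flow follows; the tarball name and version are hypothetical, so substitute whatever you actually downloaded from alsa-project.org:

        tar xjf alsa-driver-1.0.25.tar.bz2   # unpack the source (hypothetical version)
        cd alsa-driver-1.0.25                # ./configure must be run from this directory
        ./configure --with-cards=cs46xx      # if your alsa-driver version supports it,
                                             # this limits the build to the cs46xx driver
        make
        sudo make install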

    Read the article

  • Only 192.168.0.3 can request, but anyone can request /public/file.html

    - by mattalexx
    I have the following virtual host on my development server:

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /srv/web/example.com/pub
            <Directory /srv/web/example.com/pub>
                Order Deny,Allow
                Deny from all
                Allow from 192.168.0.3
            </Directory>
        </VirtualHost>

    The Allow from 192.168.0.3 part is there to only allow requests from my workstation machine. I want to tweak this to allow anyone to request a certain URL: http://example.com/public/file.html. How do I change this to allow /public/file.html requests to get through from anyone? Note: /public/file.html doesn't actually exist as a file on the server. I redirect all incoming requests through a single index file using mod_rewrite.
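    One possible approach, sketched below under Apache 2.2 semantics: a <Location> section matches the request URL rather than the filesystem, and is merged after <Directory> sections, so it can open up that one URL even though no such file exists on disk. Treat this as a sketch, not a definitive answer:

        # Added inside the <VirtualHost> block; /public/file.html is the URL
        # from the question, not a real file on disk.
        <Location /public/file.html>
            Order Allow,Deny
            Allow from all
        </Location>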

    Read the article

  • How do I get brightness controls working properly on an Eee PC 1001P?

    - by Terry
    Is there a solution to the low screen brightness issue with the Eee PC 1001P and release 12.04? When I use the brightness control, the screen goes through three adjustment cycles of dark to semi-bright, but never gets to bright. As you index the control up, brightness increases, then suddenly cuts back to dark. Use the brightness button to increase the brightness further and the same cycle happens, as though there are three distinct brightness events, each one setting back to a low level. Under no circumstances other than the initial boot-up can you get to a bright screen. I just finished installing 12.04 on two Acer (Gateway) netbooks with no brightness issue; it's just the Eee PC 1001P.

    Read the article

  • Speed up Banshee's indexing of files on a device

    - by Stefano Palazzo
    I've got an external hard drive with music on it, around 250 albums. To make it work nicely with Banshee, I've created an .is_audio_player file on the device, containing audio_folders=Music. Every time I plug it in, Banshee takes around two minutes to index the thing, slowly building up the library - and it is unusably sluggish while doing that. Is there, per chance, any way to speed it up? Should I not mount the hard disk as a music player, but instead add its contents to my library? And if I do, won't that give me lots of annoying X symbols next to the titles when they can't be found? What's the best way to have my library on an external HDD?

    Read the article

  • Removing hard-coded values and defensive design vs YAGNI

    - by Ben Scott
    First a bit of background. I'm coding a lookup from Age - Rate. There are 7 age brackets so the lookup table is 3 columns (From|To|Rate) with 7 rows. The values rarely change - they are legislated rates (first and third columns) that have stayed the same for 3 years. I figured that the easiest way to store this table without hard-coding it is in the database in a global configuration table, as a single text value containing a CSV (so "65,69,0.05,70,74,0.06" is how the 65-69 and 70-74 tiers would be stored). Relatively easy to parse then use. Then I realised that to implement this I would have to create a new table, a repository to wrap around it, data layer tests for the repo, unit tests around the code that unflattens the CSV into the table, and tests around the lookup itself. The only benefit of all this work is avoiding hard-coding the lookup table. When talking to the users (who currently use the lookup table directly - by looking at a hard copy) the opinion is pretty much that "the rates never change." Obviously that isn't actually correct - the rates were only created three years ago and in the past things that "never change" have had a habit of changing - so for me to defensively program this I definitely shouldn't store the lookup table in the application. Except when I think YAGNI. The feature I am implementing doesn't specify that the rates will change. If the rates do change, they will still change so rarely that maintenance isn't even a consideration, and the feature isn't actually critical enough that anything would be affected if there was a delay between the rate change and the updated application. I've pretty much decided that nothing of value will be lost if I hard-code the lookup, and I'm not too concerned about my approach to this particular feature. My question is, as a professional have I properly justified that decision? Hard-coding values is bad design, but going to the trouble of removing the values from the application seems to violate the YAGNI principle. EDIT To clarify the question, I'm not concerned about the actual implementation. I'm concerned that I can either do a quick, bad thing, and justify it by saying YAGNI, or I can take a more defensive, high-effort approach, that even in the best case ultimately has low benefits. As a professional programmer does my decision to implement a design that I know is flawed simply come down to a cost/benefit analysis?
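    For illustration, a minimal sketch of the hard-coded alternative, in Python purely because the question names no language; the first two tiers are the values from the question and the rest are placeholders:

        # Legislated (from_age, to_age, rate) tiers; the first two rows are from
        # the question, the remaining five rows of the table would follow.
        AGE_RATE_TABLE = [
            (65, 69, 0.05),
            (70, 74, 0.06),
            # ... five more legislated tiers
        ]

        def rate_for_age(age):
            """Return the legislated rate for an age, or raise if no tier covers it."""
            for from_age, to_age, rate in AGE_RATE_TABLE:
                if from_age <= age <= to_age:
                    return rate
            raise ValueError("no rate tier covers age %d" % age)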

    Read the article

  • SEO for a list of products with filters

    - by dana
    I am wondering if there is a recommended "best practice" for product search SEO. I know to create a dynamic sitemap file that lists links to all products on the site. However, I want to implement a bookmark-able "advanced search". Should I let search engines index any of the results? Take the following parameters for a search on a make-believe used car website:
    - minprice (minimum price in dollars)
    - maxprice (maximum price in dollars)
    - make (honda, audi, volvo)
    - model (accord, A4, S40)
    - minyear (minimum model year)
    - maxyear (maximum model year)
    - minmileage (minimum mileage)
    - maxmileage (maximum mileage)
    Given these parameters, there could be an infinite number of search combinations:
    - Price between $10,000 and $20,000: /search?minprice=10000&maxprice=20000
    - Audis with less than 50k miles: /search?make=audi&maxmileage=50000
    - More than 100,000 miles and less than $5,000: /search?minmileage=100000&maxprice=5000
    - etc.
    Over time, there may be inbound links to a variety of these types of searches, yet they are all slices of the same data. Should I allow all of these searches to be indexed?
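    If the decision is to keep the filter combinations out of the index, one common approach is a robots.txt rule over the search path (a sketch only, assuming the /search path from the question; it is not the only option):

        User-agent: *
        Disallow: /search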

    Read the article

  • Configurable Objects - Introduction

    - by Anthony Shorten
    One of the interesting facilities in the framework is Configurable Object functionality (it is also known as Task Optimization and also as Cool Tools). The idea is that any implementation can create their own views of the base product objects and services and implement functionality against those new views. For example, in Oracle Utilities Customer Care and Billing there is a Person object. That object is used to store and manage information about individuals as well as companies. In the base product you would use the Person Maintenance screen and fill in some of the screen when you wanted to register or maintain an individual, and fill out other parts of the screen when you wanted to register or maintain a company. This can be somewhat confusing to some customers. Using Configurable Objects, this can be simplified. A business object can be created that is a view of any object. For example, you could create a Human business object which would cover the aspects of the Person object pertaining to an individual, and a Company business object to cover the aspects unique to a company. Even the tag names (i.e. field names) in the object can be changed to be more like what the implementation is familiar with. The object can also be restructured. For example, a common identifier for an individual in the USA is the Social Security number; this value is a Person Identifier (as this varies in each country). In the new Human object you can remap the Person Identifier as a Social Security number. To define a business object you use a schema editor built into the browser user interface and use a mapping language to set up the business objects. An extract of the schema for the Human business object, for example, contains mapping tags as well as formatting and other tags. This information can be built manually or using a wizard which generates the base structure for you to alter. It is all stored as metadata when saved. Once a business object is built it can be used as a basis for code or other business objects (we support inheritance), called by a screen (called a UI Map), or even exposed as a Web Service. This is just a start with Configurable Objects, as you can also create views of base services called Business Services, Service Scripts used for non-object or complex object processing (as well as other things), UI Maps used for screens, and Data Areas to reuse definitions across multiple objects. Configurable Objects are powerful and I have only really touched on them here. Over the next few months I hope to add lots more entries about them.

    Read the article

  • Workflow with Flash Pro CS6 and FlashDevelop: Using fla and swc to store assets

    - by Arthur Wulf White
    I am using this tutorial: http://www.flashdevelop.org/wikidocs/index.php?title=AS3:FlexAndFlashCS3Workflow. In older versions of Flash Pro I was able to complete these steps: right-click on the symbol in the Library panel, select the "Linkage..." dialog, check "Export for ActionScript" and fill in the symbol name (i.e. MySymbol_design or assets.MySymbol_design), and do not change the base class (i.e. flash.display.MovieClip). Right now, I am stuck at that part. Any hints? What I wish to do is:
    - Use the fla for the artist to store assets.
    - Publish to a swc.
    - Extract the assets in FlashDevelop by creating an instance of their class.
    ... How is this done in CS6? To clear things up, this is what I see when I right-click a Flash symbol:

    Read the article

  • Preventing indexing duplicate content by search engines

    - by umesh awasthi
    I am in the process of migrating my old domain (www.oldurl.com) to a new domain (www.newurl.com). Almost all the content and URL structure, as well as the database, are the same; only a few URLs differ, and the only other difference will be in the domain name. I have made entries in Apache's .htaccess file to set up 301 redirects, and I have currently blocked all search engines from crawling my new domain via the robots.txt file. I am not sure how I will handle the duplicate content issue when I make the new domain go live. Should I block search engines from indexing/crawling my old domain? I am new to this field and not sure if this is actually a duplicate content issue or not.
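    For reference, a sketch of the kind of .htaccess rule being described, using the placeholder domains from the question; with correct 301s in place, search engines generally consolidate the two domains rather than treating them as duplicates:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?oldurl\.com$ [NC]
        RewriteRule ^(.*)$ http://www.newurl.com/$1 [R=301,L]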

    Read the article

  • Error in mounting HDD

    - by Vikramjeet
    I am getting the following error whenever I mount my external HDD. It was working before; then I opted to safely remove the drive, and now it gives me the following error:

        Error mounting: mount exited with exit code 13:
        ntfs_mst_post_read_fixup_warn: magic: 0x43425355 size: 4096 usa_ofs: 8850 usa_count: 65535: Invalid argument
        Actual VCN (0x800006009000000) of index buffer is different from expected VCN (0x0).
        Failed to mount '/dev/sdb1': Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a
        SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
        then reboot into Windows twice. The usage of the /f parameter is very
        important! If the device is a SoftRAID/FakeRAID then first activate
        it and mount a different device under the /dev/mapper/ directory,
        (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid'
        documentation for more details.

    Read the article

  • Recovery from URL structure change?

    - by Dejan Pelzel
    In July this year, we changed the URL structure of the website from:
    - Post: domain.com/blog/post/986/dance/heart-beats-dance-video-by-chinatsu/
    - Category: domain.com/blog/index/cosplay/
    to:
    - Post: domain.com/dance/heart-beats-dance-video-by-chinatsu-986/
    - Category: domain.com/cosplay/
    Everything was (supposedly) properly redirected with 301 redirects, and at first it seemed that the traffic returned after a couple of days, but it has now been close to 2 months and things keep getting worse, although Google is slowly indexing the changes. What is worrying me even more is that the pages crawled per day in Webmaster Tools started dropping drastically a few days ago and have just reached a new low in months (from over 2000 to 700). Should I be worried, or will things sort themselves out eventually?

    Read the article

  • problem with grub-efi

    - by Jesper
    I am installing Ubuntu on my MacBook, following the instructions at http://www.rodsbooks.com/ubuntu-efi/index.html. Everything has gone well so far, but I have now come to step 19. The CD with GRUB2 is in the drive, but when I type 'sudo apt-get install grub-efi' it says:

        Package grub-efi is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        However the following packages replace it:
          grub2-common grub-common

    The GRUB iso I downloaded and burned was this one: http://forja.cenatic.es/frs/download.php/1381/super_grub_disk_hybrid-1.98s1.iso

    Read the article

  • Oracle BI and EPM Partner Blogs

    - by Mike.Hallett(at)Oracle-BI&EPM
    Below is a simple list of some of our specialist Oracle BI and EPM Partner Blogs, where there is lots of great material and discussion.
    - http://www.aortabi.nl/news/ (Netherlands)
    - http://www.clearpeaks.com/blog/ (English)
    - http://www.peakindicators.com/index.php/knowledge-base (English)
    - http://www.project.eu.com/blog/ (English)
    - http://www.qubix.co.uk/insights (English)
    - http://www.rittmanmead.com/blog/ (English)
    - https://www.endecacommunity.com/ (English)
    If you are a specialist OPN EMEA BI and EPM Partner with hints and tips to share, and would like your Blog to be added to this list, then just let me know @ [email protected].

    Read the article

  • Retrieving model position after applying modeltransforms in XNA

    - by Glen Dekker
    For this method that the goingBeyond XNA tutorial provides, it would be really convenient if I could retrieve the new position of the model after I apply all the transforms to the mesh. I have edited the method a little for what I need. Does anyone know a way I can do this?

        public void DrawModel(Camera camera)
        {
            Matrix scaleY = Matrix.CreateScale(new Vector3(1, 2, 1));
            Matrix temp = Matrix.CreateScale(100f) * scaleY * rotationMatrix
                * translationMatrix * Matrix.CreateRotationY(MathHelper.Pi / 6)
                * translationMatrix2;

            Matrix[] modelTransforms = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(modelTransforms);

            if (camera.getDistanceFromPlayer(position + position1) > 3000)
                return;

            foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.EnableDefaultLighting();
                    effect.World = modelTransforms[mesh.ParentBone.Index] * temp * worldMatrix;
                    effect.View = camera.viewMatrix;
                    effect.Projection = camera.projectionMatrix;
                }
                mesh.Draw();
            }
        }
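    One way to get at this, sketched in the same C# under the assumption that "position" means the model origin after all transforms: the Translation property of the combined matrix is where the origin ends up (per-mesh bone transforms would shift individual meshes further).

        // Combined transform as used for effect.World, minus the bone transform.
        Matrix combined = temp * worldMatrix;

        // The translation component is the model origin in world space...
        Vector3 finalPosition = combined.Translation;

        // ...which is the same as pushing the origin through the matrix.
        Vector3 samePosition = Vector3.Transform(Vector3.Zero, combined);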

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements to the compression support inside InnoDB. The enhancements that we'll talk about in this piece are also a good example of community contributions: the work on these was conceived, implemented and contributed by the engineers at Facebook.

    Before we plunge into the details, let us familiarize ourselves with some of the key concepts surrounding InnoDB compression.
    - In InnoDB, compressed pages are a fixed size. Supported sizes are 1, 2, 4, 8 and 16K. The compressed page size is specified at table creation time.
    - InnoDB uses zlib for compression.
    - The InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well; i.e., we can have a page in the buffer pool in compressed-only form, or in a state where we have both the compressed page and the uncompressed version, but we'll never have a page in uncompressed-only form. On disk we'll only ever have the compressed page.
    - When both compressed and uncompressed images are present in the buffer pool, they are always kept in sync; i.e., changes are applied to both atomically.
    - Recompression happens when changes are made to the compressed data. In order to minimize recompressions, InnoDB maintains a modification log within a compressed page. This is the extra space available in the page after compression, and it is used to log modifications to the compressed data, thus avoiding recompressions.
    - DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because an UPDATE of a key is mapped to INSERT+DELETE+purge.
    - A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In such a case, we first try to reorganize the page and attempt to recompress, and if that fails as well then we split the page into two and recompress both pages.

    Now let's talk about the three major improvements that we made in MySQL 5.6.

    Logging of Compressed Page Images: InnoDB used to log the entire compressed data on the page to the redo logs when recompression happens. This was an extra safety measure to guard against the rare case where an attempt is made to do recovery using a different zlib version from the one that was used before the crash. Because recovery is a page-level operation in InnoDB, we have to be sure that all recompress attempts succeed without causing a btree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavy duty but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion: when we generate much more redo than normal we fill up the space much more quickly, and in order to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool. MySQL 5.6 introduces a new global configuration parameter, innodb_log_compressed_pages. The default value is true, which is the same as the previous behavior. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib, then you should set this parameter to false. This is a dynamic parameter.

    Compression Level: You can now set the compression level that zlib should use to compress the data. The global parameter is innodb_compression_level - the default value is 6 (the zlib default) and allowed values are 1 to 9. Again the parameter is dynamic, i.e., you can change it on the fly.

    Dynamic Padding to Reduce Compression Failures: Compression failures are expensive in terms of CPU. We go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failures depends largely on the compressibility of the data. In MySQL 5.6, courtesy of Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures, then we should try to pack the 16K uncompressed version of the page less densely; i.e., we let some space in the 16K page go unused in an attempt to ensure that the recompression won't end up in a failure. In other words, we dynamically keep adding 'pad' to the 16K page till we get compression failures within an agreeable range. It works the other way as well: we'll keep removing the pad if the failure rate is fairly low. To tune the padding effort, two configuration variables are exposed:
    - innodb_compression_failure_threshold_pct: default 5, range 0 - 100, dynamic; the percentage of compress ops that must fail before we start padding. The value 0 has the special meaning of disabling padding.
    - innodb_compression_pad_pct_max: default 50, range 0 - 75, dynamic; the maximum percentage of the uncompressed data page that can be reserved as pad.
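    As a quick sketch of these knobs in use (the table and column names here are made up; compressed tables also require innodb_file_per_table and the Barracuda file format):

        SET GLOBAL innodb_file_per_table = ON;
        SET GLOBAL innodb_file_format = 'Barracuda';

        -- An 8K compressed page size, chosen from the supported 1/2/4/8/16K.
        CREATE TABLE compressed_demo (
            id BIGINT PRIMARY KEY,
            payload VARCHAR(4000)
        ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

        -- The new parameters are all dynamic:
        SET GLOBAL innodb_log_compressed_pages = OFF;  -- only if zlib versions match across recovery
        SET GLOBAL innodb_compression_level = 9;       -- maximum zlib effort
        SET GLOBAL innodb_compression_failure_threshold_pct = 10;
        SET GLOBAL innodb_compression_pad_pct_max = 50;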

    Read the article

  • Project Euler 2: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 2. As always, any feedback is welcome.

        # Euler 2
        # http://projecteuler.net/index.php?section=problems&id=2
        # Each new term in the Fibonacci sequence is generated
        # by adding the previous two terms. By starting with 1
        # and 2, the first 10 terms will be:
        # 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
        # Find the sum of all the even-valued terms in the
        # sequence which do not exceed four million.

        import time
        start = time.time()

        total = 0
        previous = 0
        i = 1
        while i <= 4000000:
            if i % 2 == 0:
                total += i
            # variable swapping removes the need for a temp variable
            i, previous = previous, previous + i

        print total
        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a = raw_input('Press return to continue')

    Read the article

  • How do you Install the Latest Release of Miro?

    - by Brenton Horne
    In the Software Centre the latest release of Miro available is 4.0.4, whereas the latest release of Miro is 5.0.4. How do I install 5.0.4 on 12.10? I have tried following the guide at http://www.getmiro.com/download/for-ubuntu/ (and thus have already run sudo add-apt-repository ppa:pcf/miro-releases), but it failed, and when I then ran sudo apt-get update I received the error:

        W: Failed to fetch http://ppa.launchpad.net/pcf/miro-releases/ubuntu/dists/quantal/main/source/Sources 404 Not Found
        W: Failed to fetch http://ppa.launchpad.net/pcf/miro-releases/ubuntu/dists/quantal/main/binary-i386/Packages 404 Not Found
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • Is creating a full application in Silverlight advisable?

    - by Anthony
    Is creating a huge public site fully in Silverlight really advisable - for example, an e-commerce site? I don't want to start a debate, but I actually feel Silverlight shouldn't be used for a full website, because the biggest loss you incur is SEO. No search engine to date can parse a xap file and index it based on its content. You can get around it with ifs and thens - if Silverlight is not supported, then serve an ASP.NET equivalent page - but that only doubles the effort of building the application, more than anything else. Why write the same code twice in two applications meant for the same purpose? If that is the only option, why not create just the ASP.NET application? What are your views? Thanks in advance :)

    Read the article

  • Best way to prevent Google from indexing a directory [duplicate]

    - by Gkhan14
    This question already has an answer here: Stopping Google index some web pages (5 answers)
    I've researched many methods of preventing Google and other search engines from crawling a specific directory. The two most popular ones I've seen are:
    - Adding it to the robots.txt file: Disallow: /directory/
    - Adding a meta tag: <meta name="robots" content="noindex, nofollow">
    Which method would work best? I want this directory to remain "invisible" to search engines so it does not affect any of my site's ranking. In other words, I want this directory to be neutral/invisible and "just there." I don't want it to affect any ranking. Which method would be the best way to achieve this?
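    For completeness, a third option exists (a sketch, assuming Apache with mod_headers and the placeholder directory name from the question): serving an X-Robots-Tag header from an .htaccess file inside the directory, which applies noindex to every response from it without advertising the path in robots.txt.

        # /directory/.htaccess
        <IfModule mod_headers.c>
            Header set X-Robots-Tag "noindex, nofollow"
        </IfModule>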

    Read the article

  • apache permissions problem

    - by nishan
    I'm running Ubuntu 12.04 LTS, 2 GB RAM, 500 GB HDD. My HDD has 4 partitions:
    - Partition 1 = 40 GB, Windows (NTFS, label = win32)
    - Partition 2 = 320 GB, Windows (FAT, label = common)
    - Partition 3 = 40 GB, Ubuntu (EXT4)
    I installed apache2, and to change its default www directory I used 'gksu gedit /etc/apache2/sites-enabled/000-default' and changed it to /media/common/www. After all that I ran in a terminal:

        chmod 777 /media/common/www
        chmod 777 /media/common/www/.

    After that I typed 127.0.0.1/index.php into Firefox. It says:

        Forbidden
        You don't have permission to access / on this server.
        Apache/2.2.22 (Ubuntu) Server at 127.0.0.1 Port 80

    Before my changes it was working fine. How should I run my websites?
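    Changing DocumentRoot alone is usually not enough on Apache 2.2: access is granted per directory, and the default config only grants it under /var/www. A sketch of the kind of <Directory> block that would also be needed in the same 000-default file (the path is the one from the question; note that if the FAT partition is mounted without suitable uid/umask mount options, the chmod calls have no effect there, since FAT has no Unix permissions):

        <Directory /media/common/www>
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>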

    Read the article
