Search Results

Search found 12803 results on 513 pages for 'lucene index'.

Page 261 of 513

  • Which tools do you use to make GTK themes?

    - by tutuca
    I'm trying to make a new GTK theme using the Murrine engine, with Humanity (the default in Ubuntu 9.10) as a template. You can grab the code at http://github.com/tutuca/themes However, I found the process of creating a new theme with it cumbersome. There is no central starting point. The documentation for both the engine options (gtkrc's and such) and general theming practices (the format of the index.theme files, folders, and so on) is scarce, and how-tos and tutorials are often old or subject to lots of opinionated debate, with confusing results (to me, coming from a web developer background, at least :-). So... I wanted to ask the fellow GTK themers and artists out there: which tools do you use to create a new theme, and what does your average workflow look like?
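
    For reference only, since the question mentions the index.theme format: a GNOME 2-era metatheme is usually described by a small key/value file along the lines of the sketch below. The theme names are placeholders, with Humanity's icons reused as in the question's template.

        [Desktop Entry]
        Type=X-GNOME-Metatheme
        Name=MyMurrineTheme
        Comment=Example Murrine-based metatheme
        Encoding=UTF-8

        [X-GNOME-Metatheme]
        GtkTheme=MyMurrineTheme
        MetacityTheme=MyMurrineTheme
        IconTheme=Humanity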

    Read the article

  • Install Cirrus Logic CS46xx (audio card) drivers

    - by Aikanáro
    I have two sound cards: one is on-board (a VIA), the other is a Cirrus Logic CS46xx. This is what lspci shows me: 04:04.0 Multimedia audio controller: Cirrus Logic CS 4614/22/24/30 [CrystalClear SoundFusion Audio Accelerator] (rev 01). It only shows the Cirrus Logic because I disabled the VIA card through the BIOS. This page: http://es.driverscollection.com/?file_id=13152 gives me instructions to install it, but I can't follow them because the folders indicated on the page do not match the ones I see on my system. The ALSA page, http://alsa-project.org/main/index.php/Matrix:Module-cs46xx, also gives instructions, but I don't understand them. For example, they say to type ./configure in a terminal, but don't say in what directory. I don't think those are instructions for beginners... Right now I can't hear anything. I decided to disable the VIA audio card because I've read it doesn't get along with Linux, although I use the integrated VIA video card. I have Ubuntu 11.10.
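
    A hedged sketch of what the ALSA page assumes (the path and the --with-cards value are illustrative, not taken from the question): the ./configure step is meant to be run from inside the unpacked alsa-driver source tree.

        cd ~/Downloads/alsa-driver-1.0.25    # whatever directory the alsa-driver tarball was extracted to
        ./configure --with-cards=cs46xx      # build just the cs46xx module
        make
        sudo make install
        sudo alsa force-reload               # reload the ALSA modules (script from alsa-utils)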

    Read the article

  • Only 192.168.0.3 can request, but anyone can request /public/file.html

    - by mattalexx
    I have the following virtual host on my development server:

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /srv/web/example.com/pub
            <Directory /srv/web/example.com/pub>
                Order Deny,Allow
                Deny from all
                Allow from 192.168.0.3
            </Directory>
        </VirtualHost>

    The Allow from 192.168.0.3 part is there to only allow requests from my workstation machine. I want to tweak this to allow anyone to request a certain URL: http://example.com/public/file.html How do I change this to allow /public/file.html requests to get through from anyone? Note: /public/file.html doesn't actually exist as a file on the server. I redirect all incoming requests through a single index file using mod_rewrite.
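
    One way to open up just that URL (a sketch using the same Apache 2.2 Order/Allow syntax as the vhost above): add a Location block, which matches the request URI even when no file exists on disk, and which is merged after Directory sections, so it overrides the per-directory restriction for that one path.

        # inside the same <VirtualHost *:80> block
        <Location /public/file.html>
            Order Allow,Deny
            Allow from all
        </Location>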

    Read the article

  • How do I get brightness controls working properly on an Eee PC 1001P?

    - by Terry
    Is there a solution to the low screen brightness issue with the Eee PC 1001P and release 12.04? When I use the brightness control, the screen goes through three adjustment cycles of dark to semi-bright, but never gets to bright. As you step the control up, brightness increases, then suddenly cuts back to dark. Use the brightness button to increase it further and the same cycle happens again, as though there are three distinct brightness ranges, each one resetting back to a low level. Under no circumstances other than the initial boot-up can you get a bright screen. I just finished installing 12.04 on two Acer (Gateway) netbooks with no brightness issue; it is just the Eee PC 1001P.
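
    Not a confirmed fix for this exact model, but a commonly suggested workaround for netbook backlight stepping issues is to hand the backlight over to the vendor driver via kernel parameters; treat the values below as an assumption to test.

        # append to the existing line in /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=Linux acpi_backlight=vendor"

        sudo update-grub    # regenerate grub.cfg, then reboot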

    Read the article

  • Speed up Banshee's indexing of files on a device

    - by Stefano Palazzo
    I've got an external hard drive with music on it, around 250 albums. To make it work nicely with Banshee, I've created an .is_audio_player file on the device, containing audio_folders=Music. Every time I plug it in, Banshee takes around two minutes to index the thing, slowly building up the library - and being unusably sluggish while doing that. Is there, perchance, any way to speed it up? Should I not mount the hard disk as a music player, but add its contents to my library instead? And, if I do, won't that give me lots of annoying X symbols next to the titles when they can't be found? What's the best way to have my library on an external HDD?
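
    For reference, the .is_audio_player file is a plain key=value file read by Banshee's mass-storage support; a minimal version matching the setup described above is sketched below (the output_formats line is optional and assumed here).

        # .is_audio_player, placed in the root of the external drive
        audio_folders=Music
        output_formats=audio/mpeg,application/ogg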

    Read the article

  • Workflow with Flash Pro CS6 and FlashDevelop: Using fla and swc to store assets

    - by Arthur Wulf White
    I am using this tutorial: http://www.flashdevelop.org/wikidocs/index.php?title=AS3:FlexAndFlashCS3Workflow In older versions of Flash Pro I was able to complete these steps: right-click on the symbol in the Library panel, select the "Linkage..." dialog, check "Export for ActionScript" and fill in the symbol name (i.e. MySymbol_design or assets.MySymbol_design), and do not change the base class (i.e. flash.display.MovieClip). Right now, I am stuck at that part. Any hints? What I wish to do is: use an fla for the artist to store assets, publish to a swc, and extract the assets in FlashDevelop by creating an instance of their class. How is this done in CS6? To clear things up, this is what I see when I right-click a Flash symbol:

    Read the article

  • Error in mounting HDD

    - by Vikramjeet
    I am getting the following error whenever I mount my external HDD. It was working before, and then I opted for safely removing the drive. Now it's giving me the following error:

        Error mounting: mount exited with exit code 13: ntfs_mst_post_read_fixup_warn: magic: 0x43425355 size: 4096 usa_ofs: 8850 usa_count: 65535: Invalid argument
        Actual VCN (0x800006009000000) of index buffer is different from expected VCN (0x0).
        Failed to mount '/dev/sdb1': Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details.
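
    The error text itself names the usual remedy: run chkdsk against the volume from Windows and reboot twice. If no Windows machine is available, ntfsfix from the ntfs-3g package can clear some minor inconsistencies, though it is far less thorough. A hedged sketch; /dev/sdb1 comes from the error above and the drive letter is an assumption.

        REM from a Windows command prompt, assuming the external drive is E:
        chkdsk /f E:

        # or, as a lighter-weight attempt on Ubuntu:
        sudo ntfsfix /dev/sdb1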

    Read the article

  • Preventing indexing duplicate content by search engines

    - by umesh awasthi
    I am in the process of migrating my old domain (www.oldurl.com) to a new domain (www.newurl.com). Almost all the content and the URL structure, as well as the database, are the same, except for a few URLs; the only difference will be the domain name. I have made entries in Apache's .htaccess file to set up 301 redirects, and I have currently blocked all search engines from crawling my new domain via the robots.txt file. I am not sure how I will handle the duplicate content issue when I make the new domain go live. Should I block search engines from indexing/crawling my old domain instead? I am new to this field and not sure if this is actually a duplicate content issue or not.
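
    A typical .htaccess rule for this kind of domain move is a host-based rewrite that sends every old path to the same path on the new domain (a sketch; the domain names follow the placeholders in the question):

        # on www.oldurl.com
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?oldurl\.com$ [NC]
        RewriteRule ^(.*)$ http://www.newurl.com/$1 [R=301,L]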

    Read the article

  • SEO for a list of products with filters

    - by dana
    I am wondering if there is a recommended "best practice" for product search SEO. I know to create a dynamic sitemap file that lists links to all products on the site. However, I want to implement a bookmarkable "advanced search". Should I let search engines index any of the results? Take the following parameters for a search on a make-believe used car website:

        minprice   (minimum price in dollars)
        maxprice   (maximum price in dollars)
        make       (honda, audi, volvo)
        model      (accord, A4, S40)
        minyear    (minimum model year)
        maxyear    (maximum model year)
        minmileage (minimum mileage)
        maxmileage (maximum mileage)

    Given these parameters, there could be an infinite number of search combinations:

        Price between $10,000 and $20,000:            /search?minprice=10000&maxprice=20000
        Audis with less than 50k miles:               /search?make=audi&maxmileage=50000
        More than 100,000 miles and less than $5,000: /search?minmileage=100000&maxprice=5000
        etc.

    Over time, there may be inbound links to a variety of these types of searches, yet they are all slices of the same data. Should I allow all of these searches to be indexed?
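
    A common pattern here (a sketch, not a rule) is to keep the parameterised searches out of the index while still letting crawlers follow links out of them, for example:

        # robots.txt - keep /search?... out of the crawl
        User-agent: *
        Disallow: /search

        <!-- or, on the search-results template itself -->
        <meta name="robots" content="noindex, follow">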

    Read the article

  • Recovery from URL structure change?

    - by Dejan Pelzel
    In July this year, we changed the URL structure of the website from:

        Post:     domain.com/blog/post/986/dance/heart-beats-dance-video-by-chinatsu/
        Category: domain.com/blog/index/cosplay/

    to:

        Post:     domain.com/dance/heart-beats-dance-video-by-chinatsu-986/
        Category: domain.com/cosplay/

    Everything was (supposedly) properly redirected with 301 redirects, and at first it seemed that the traffic returned after a couple of days, but it has now been close to 2 months and things keep getting worse, although Google is slowly indexing the changes. What is worrying me even more is that the pages crawled per day in Webmaster Tools started dropping drastically a few days ago and have just reached a new low in months (from over 2,000 to 700). Should I be worried, or will things sort themselves out eventually?
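
    Since the old and new schemes map onto each other with a regular pattern, the 301s can be expressed as two mod_rewrite rules (a sketch built from the example URLs above; the real routes may need adjusting):

        # /blog/post/986/dance/heart-beats-.../  ->  /dance/heart-beats-...-986/
        RewriteEngine On
        RewriteRule ^blog/post/(\d+)/([^/]+)/([^/]+)/?$ /$2/$3-$1/ [R=301,L]
        # /blog/index/cosplay/  ->  /cosplay/
        RewriteRule ^blog/index/([^/]+)/?$ /$1/ [R=301,L]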

    Read the article

  • problem with grub-efi

    - by Jesper
    I am installing Ubuntu on my MacBook, following the instructions here: http://www.rodsbooks.com/ubuntu-efi/index.html Everything has gone well so far, but I have now come to step 19. The CD with GRUB 2 is in the drive, but when I type 'sudo apt-get install grub-efi' it says:

        Package grub-efi is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        However the following packages replace it:
          grub2-common grub-common

    The GRUB iso I downloaded and burned was this one: http://forja.cenatic.es/frs/download.php/1381/super_grub_disk_hybrid-1.98s1.iso
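
    One thing worth checking (hedged; package names vary by release): the EFI build of GRUB is split into architecture-specific packages, so it may simply be named differently in the repositories.

        apt-cache search grub-efi             # see what this release actually ships
        sudo apt-get install grub-efi-amd64   # 64-bit EFI build (grub-efi-ia32 for 32-bit firmware)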

    Read the article

  • How do you Install the Latest Release of Miro?

    - by Brenton Horne
    In the Software Centre the latest release of Miro available is 4.0.4, whereas the latest release of Miro is 5.0.4. How do I get 5.0.4 on 12.10? I have tried following the guide at http://www.getmiro.com/download/for-ubuntu/ (and thus have already run sudo add-apt-repository ppa:pcf/miro-releases), but it failed, and when I ran sudo apt-get update I received the error:

        W: Failed to fetch http://ppa.launchpad.net/pcf/miro-releases/ubuntu/dists/quantal/main/source/Sources 404 Not Found
        W: Failed to fetch http://ppa.launchpad.net/pcf/miro-releases/ubuntu/dists/quantal/main/binary-i386/Packages 404 Not Found
        E: Some index files failed to download. They have been ignored, or old ones used instead.
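
    The 404s mean the PPA publishes nothing for quantal, so apt will keep failing until the source is removed again (a sketch; ppa-purge comes from the ppa-purge package):

        sudo add-apt-repository --remove ppa:pcf/miro-releases
        # or, to also roll back any packages that came from the PPA:
        sudo apt-get install ppa-purge
        sudo ppa-purge ppa:pcf/miro-releases
        sudo apt-get update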

    Read the article

  • Oracle BI and EPM Partner Blogs

    - by Mike.Hallett(at)Oracle-BI&EPM
    Below is a simple list of some of our specialist Oracle BI and EPM Partner Blogs, where there is lots of great material and discussion.

        http://www.aortabi.nl/news/                              Netherlands
        http://www.clearpeaks.com/blog/                          English
        http://www.peakindicators.com/index.php/knowledge-base   English
        http://www.project.eu.com/blog/                          English
        http://www.qubix.co.uk/insights                          English
        http://www.rittmanmead.com/blog/                         English
        https://www.endecacommunity.com/                         English

    If you are a specialist OPN EMEA BI and EPM Partner with hints and tips to share, and would like your blog to be added to this list, then just let me know @ [email protected].

    Read the article

  • Project Euler 2: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 2. As always, any feedback is welcome.

        # Euler 2
        # http://projecteuler.net/index.php?section=problems&id=2
        # Each new term in the Fibonacci sequence is generated
        # by adding the previous two terms. By starting with 1
        # and 2, the first 10 terms will be:
        # 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
        # Find the sum of all the even-valued terms in the
        # sequence which do not exceed four million.

        import time

        start = time.time()

        total = 0
        previous = 0
        i = 1
        while i <= 4000000:
            if i % 2 == 0:
                total += i
            # variable swapping removes the need for a temp variable
            i, previous = previous, previous + i

        print total
        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a = raw_input('Press return to continue')

    Read the article

  • Is creating a full application in Silverlight advisable?

    - by Anthony
    Is creating a huge public site fully in Silverlight really advisable? For example, an e-commerce site. I don't want to start a debate, but I feel Silverlight shouldn't be used for a full website, because the biggest loss you incur is SEO. No search engine to date can parse the XAP file and index it based on its content. You can get around it by doing ifs and thens, like: if Silverlight is not supported, then serve an ASP.NET equivalent page, but that only doubles the effort of building the application, more than anything else. Why write the code twice in two applications meant for the same purpose? If that is the only option, why not create the ASP.NET application only? What are your views? Thanks in advance :)

    Read the article

  • Best way to prevent Google from indexing a directory [duplicate]

    - by Gkhan14
    This question already has an answer here: Stopping Google index some web pages (5 answers) I've researched many methods of preventing Google and other search engines from crawling a specific directory. The two most popular ones I've seen are: adding it to the robots.txt file: Disallow: /directory/ or adding a meta tag: <meta name="robots" content="noindex, nofollow"> Which method would work best? I want this directory to remain "invisible" to search engines so it does not affect any of my site's ranking. In other words, I want this directory to be neutral/invisible and "just there." Which method would be the best to achieve this?
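
    A third option worth noting: robots.txt only blocks crawling (URLs discovered elsewhere can still show up in results), while a meta tag requires the page to be crawled to be seen and only works on HTML. An X-Robots-Tag response header set at the server level covers every file in the directory (a sketch for Apache, assuming mod_headers is enabled):

        # .htaccess inside /directory/
        Header set X-Robots-Tag "noindex, nofollow"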

    Read the article

  • Retrieving model position after applying modeltransforms in XNA

    - by Glen Dekker
    For this method that the goingBeyond XNA tutorial provides, it would be really convenient if I could retrieve the new position of the model after I apply all the transforms to the mesh. I have edited the method a little for what I need. Does anyone know a way I can do this?

        public void DrawModel(Camera camera)
        {
            Matrix scaleY = Matrix.CreateScale(new Vector3(1, 2, 1));
            Matrix temp = Matrix.CreateScale(100f) * scaleY * rotationMatrix * translationMatrix
                          * Matrix.CreateRotationY(MathHelper.Pi / 6) * translationMatrix2;

            Matrix[] modelTransforms = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(modelTransforms);

            if (camera.getDistanceFromPlayer(position + position1) > 3000)
                return;

            foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.EnableDefaultLighting();
                    effect.World = modelTransforms[mesh.ParentBone.Index] * temp * worldMatrix;
                    effect.View = camera.viewMatrix;
                    effect.Projection = camera.projectionMatrix;
                }
                mesh.Draw();
            }
        }
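
    A minimal sketch of one way to do it, reusing the names from the snippet above: the model's world-space position is the translation component of the composed world transform.

        // world transform as composed in DrawModel (bone transform omitted for the model as a whole)
        Matrix world = temp * worldMatrix;
        Vector3 worldPosition = world.Translation;   // position after all transforms are applied

        // for a specific mesh, fold in its bone transform as well:
        // Vector3 meshPosition = (modelTransforms[mesh.ParentBone.Index] * temp * worldMatrix).Translation;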

    Read the article

  • apache permissions problem

    - by nishan
    I'm running Ubuntu 12.04 LTS, 2 GB RAM, 500 GB HDD. My HDD has several partitions: partition 1 = 40 GB Windows (NTFS, label = win32), partition 2 = 320 GB Windows (FAT, label = common), partition 3 = 40 GB Ubuntu (EXT4). I installed apache2, and to change its default www directory I used 'gksu gedit /etc/apache2/sites-enabled/000-default' and changed it to /media/common/www. After all that I ran in a terminal: chmod 777 /media/common/www and chmod 777 /media/common/www/. After that I typed 127.0.0.1/index.php into Firefox and it says "Forbidden. You don't have permission to access / on this server. Apache/2.2.22 (Ubuntu) Server at 127.0.0.1 Port 80". Before my changes it was working fine. How should I serve my websites?
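
    Two things usually cause this besides file modes: Apache has not been granted access to the new DocumentRoot, and on a FAT partition chmod has no effect anyway, since permissions there are fixed by the mount options (uid/umask in /etc/fstab). A sketch of the access grant for Apache 2.2, using the path from the question:

        <Directory /media/common/www>
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>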

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements to the compression support inside InnoDB. The enhancements that we'll talk about in this piece are also a good example of community contributions: the work on them was conceived, implemented and contributed by the engineers at Facebook.

    Before we plunge into the details, let us familiarize ourselves with some of the key concepts surrounding InnoDB compression. In InnoDB, compressed pages are of a fixed size; supported sizes are 1, 2, 4, 8 and 16K, and the compressed page size is specified at table creation time. InnoDB uses zlib for compression. The InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well; that is, a page in the buffer pool can exist in compressed-only form, or with both the compressed page and the uncompressed version, but never in uncompressed-only form. On disk we only ever have the compressed page. When both compressed and uncompressed images are present in the buffer pool, they are always kept in sync, i.e. changes are applied to both atomically.

    Recompression happens when changes are made to the compressed data. In order to minimize recompressions, InnoDB maintains a modification log within a compressed page. This is the extra space available in the page after compression, and it is used to log modifications to the compressed data, thus avoiding recompressions. DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because an UPDATE of a key is mapped to INSERT+DELETE+purge. A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In that case, we first try to reorganize the page and attempt to recompress, and if that fails as well, we split the page into two and recompress both pages.

    Now let's talk about the three major improvements that we made in MySQL 5.6.

    Logging of Compressed Page Images: InnoDB used to log the entire compressed data on the page to the redo logs when recompression happens. This was an extra safety measure to guard against the rare case where an attempt is made to do recovery using a different zlib version from the one that was used before the crash. Because recovery is a page-level operation in InnoDB, we have to be sure that all recompress attempts succeed without causing a B-tree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavy-duty but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion: when we generate much more redo than normal, we fill up the space much more quickly, and in order to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool. Starting with MySQL 5.6, a new global configuration parameter, innodb_log_compressed_pages, controls this logging. The default value is true, which is the same as the previous behavior. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib, then you should set this parameter to false. This is a dynamic parameter.

    Compression Level: You can now set the compression level that zlib should use to compress the data. The global parameter is innodb_compression_level; the default value is 6 (the zlib default) and allowed values are 1 to 9. Again the parameter is dynamic, i.e. you can change it on the fly.

    Dynamic Padding to Reduce Compression Failures: Compression failures are expensive in terms of CPU. We go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failures depends largely on the compressibility of the data. In MySQL 5.6, courtesy of Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures, then we should try to pack the 16K uncompressed version of the page less densely, i.e. we let some space in the 16K page go unused in the hope that the recompression won't end up in a failure. In other words, we dynamically keep adding 'pad' to the 16K page until compression failures fall within an agreeable range. It works the other way as well: we'll keep removing the pad if the failure rate is fairly low. To tune the padding effort, two configuration variables are exposed: innodb_compression_failure_threshold_pct (default 5, range 0 - 100, dynamic) is the percentage of compress operations that must fail before we start using padding, with the value 0 having the special meaning of disabling padding; innodb_compression_pad_pct_max (default 50, range 0 - 75, dynamic) is the maximum percentage of the uncompressed data page that can be reserved as pad.
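
    As a quick reference, compression remains a per-table setting and the new 5.6 knobs are ordinary dynamic global variables (a sketch; the table and column names are made up):

        -- compressed table with 8K pages (requires innodb_file_per_table=1
        -- and innodb_file_format=Barracuda)
        CREATE TABLE t_compressed (
            id BIGINT NOT NULL PRIMARY KEY,
            payload VARCHAR(4000)
        ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

        -- the 5.6 additions discussed above, all dynamic:
        SET GLOBAL innodb_log_compressed_pages = OFF;
        SET GLOBAL innodb_compression_level = 9;
        SET GLOBAL innodb_compression_failure_threshold_pct = 5;
        SET GLOBAL innodb_compression_pad_pct_max = 50;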

    Read the article

  • How do bing-bot (is that the right spider name?) and googlebot interpret 301 redirects?

    - by jbcurtin
    I've been looking for documentation on how the Microsoft and Google bots interpret 301 redirects. It seems that googlebot stores documents in a URL-based index system, but I haven't been able to figure out how Bing works. Should I assume that they are still working towards copying everyone else and use an algorithm close to Google's? Is it best to just forward a page to its new location via JavaScript? I think this might be a blackhat trick, but how would I tell the bots that it's not? Is a 301 redirect my best option, so I just have to bite the bullet because said pages are no longer in existence? What other options do I have that I might not be aware of?

    Read the article

  • How to remove duplicate content, which is still indexed, but not linked to anymore?

    - by David
    A bug in the tool which we use to create search-engine-friendly URLs changed our whole URL structure overnight, and we only noticed after Google had already indexed the pages. Now we have a massive duplicate content issue, causing a harsh drop in rankings. Webmaster Tools shows over 1,000 duplicate title tags, so I don't think Google understands what is going on. Right URL: abc.com/price/sharp-ah-l13-12000-btu.html Wrong URL: abc.com/item/sharp-l-series-ahl13-12000-btu.html (created by mistake) After that, we changed all URLs back to the "right URLs" and set up a 301 redirect for all "wrong URLs" a few days later. Now a massive number of pages is still in the index twice. As we do not link internally to the "wrong URLs" anymore, I am not sure if Google will re-crawl them very soon. What can we do to solve this issue and tell Google that all the "wrong URLs" now redirect to the "right URLs"? Best, David

    Read the article

  • Oracle Solaris Studio Express 6/10 and its Customer Feedback Program are now available

    - by pieter.humphrey
    Oracle Solaris Studio Express 6/10 and the Customer Feedback Program for it are now available. Oracle Solaris Studio Express 6/10 is available on Solaris 10 (SPARC, x86), OEL 5 (x86), RHEL 5 (x86) and SuSE 11 (x86) today, and will be available for OpenSolaris in the near future. New feature highlights since the last release include:

        - C/C++/Fortran compiler optimizations for the latest UltraSPARC and SPARC64-based architectures such as UltraSPARC T2 and SPARC64 VII
        - C/C++/Fortran compiler optimizations for the latest x86 architectures including the Intel Xeon 7500 processor series (Nehalem-EX) and the Intel Xeon 5600 processor series (Westmere-EP)
        - Enhanced debugging and code coverage tooling
        - Improved application profiling with the Performance Analyzer
        - Updated IDE based on NetBeans 6.8

    To find more information and download, go to http://developers.sun.com/sunstudio/downloads/express/ To participate in the customer feedback program for Oracle Solaris Studio Express 6/10, go to http://developers.sun.com/sunstudio/customerfeedback/index.jsp Please get the word out, try out this new release and send us your feedback!

    Read the article

  • FTP file access problem

    - by Fahad Uddin
    I recently got malware on my website. I have made a backup of the website on my computer and am trying to wipe my FTP. I am trying to delete the root folder but I get the following error message on all of the malicious files: Response: 550 Could not delete index.php: Permission denied. I am the sole administrator of the FTP, so permissions should not be an issue. My host provider does not seem to suffer from this problem, as his websites are running well without any malware. I have also tried changing the root to 777 to see if the permission change would help me delete the files, but I still get the same error. Please help out. Thanks

    Read the article

  • library put in /usr/local/lib is not loaded

    - by IARI
    Let me state in advance: one might think this question is for Server Fault, but I think it is Ubuntu (config) specific. In short: I have put libwkhtmltox.so in /usr/local/lib as stated in the installation instructions linked below, but it appears the library is not loaded. I am trying to install php-wkhtmltox, a PHP extension for wkhtmltox, on my local desktop (Ubuntu 12.04). I have extracted the source and changed to the corresponding directory. After running phpize, ./configure fails at:

        checking for libwkhtmltox support... yes, shared not found
        configure: error: Please install libwkhtmltox

    I suspect the reason the library is not loaded is that the path is not checked!? How do I proceed? Here are the instructions I followed: http://davidbomba.com/index.php/2011/08/04/php-wkhtmltox/ http://roundhere.net/journal/install-wkhtmltopdf-php-bindings/
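
    A hedged sketch of the usual next steps: /usr/local/lib is normally on the dynamic linker's search path, but the linker cache has to be refreshed after a library is copied there, and the extension's configure run can be pointed at the directory explicitly through the standard LDFLAGS variable.

        sudo ldconfig                  # rebuild the ld.so cache after copying the .so into /usr/local/lib
        ldconfig -p | grep wkhtmltox   # confirm the linker now sees libwkhtmltox.so

        # then re-run the extension's build with the library path made explicit
        phpize
        LDFLAGS="-L/usr/local/lib" ./configure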

    Read the article

  • SEO and unique IDs in urls

    - by kokoko
    I have a web site whose home page is at http://domain.com. When a new user first hits the web site, I create a unique ID of 5 chars for them (for example 'abcde') and redirect them to http://domain.com/abcde so they can later bookmark it and return to their workspace. My question is: what's the best approach for SEO purposes? I need the main URL domain.com to be indexed, but Google will also get the redirect and will not index the main page. I know about canonical URLs, but that applies only when the domain.com URL does not redirect. Also, should I use a 301 or 302 code for the redirect?
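
    One way to reconcile the two goals (a sketch, not a rule): issue the workspace redirect as a 302, since the target is user-specific rather than a permanent move of the home page, and have each workspace URL declare the root as its canonical so anything that does get indexed consolidates onto it.

        <!-- in the <head> of http://domain.com/abcde -->
        <link rel="canonical" href="http://domain.com/">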

    Read the article
