Search Results

Search found 30894 results on 1236 pages for 'best practice'.

Page 756/1236

  • Windows Server - Dual NIC Bandwidth Pooling

    - by tsilb
    I have a Windows Server 2008 machine with dual NICs. Both are plugged into the same switch in a typical one-switch, one-gateway home network. This server is used almost exclusively for inbound connections. It hosts a web server (IIS 6), SQL server, and file server (via LAN UNC paths and mapped drives). How do I make best use of inbound bandwidth across both NICs? For example, if I connect to it by hostname and one of the interfaces has high traffic, I'd like the new connection to use the other interface.

    Read the article

  • How do you author HDR content?

    - by Nathan Reed
    How do you make it easy for your artists to author content for an HDR renderer? What kinds of tools should you provide, and what workflows need to change, in going from LDR to HDR? Note that I'm not asking about the technical aspects of implementing an HDR renderer, but about best practices for creating materials and lighting in HDR. I've googled around a bit, but there doesn't seem to be much about this topic on the web. Can anyone point me to some good resources on this, or share their own experiences? Some specific points:
    Lighting - how can lighting artists pick HDR light colors? Do they have a standard LDR color picker and then a multiplier? Is the multiplier in gamma or linear space? Maybe instead of a multiplier it's a log-luminance? Or a physical brightness level, like the number of lumens? How will they know what multiplier/luminance/brightness is "correct" for a given light?
    Materials - how can texture artists make emissive color maps, such as neon signs, TV screens, skyboxes, etc.? Can you paint one as a regular LDR (8-bit-per-channel) image and apply a multiplier (or log-luminance, etc.)? Are there cases where it's necessary to actually paint HDR images? If so, how do you go about this in Photoshop (or other software)?
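    One workflow the question floats - a standard LDR picker plus a multiplier - is easy to reason about numerically. A minimal Python sketch, assuming an sRGB picker value and a scalar intensity applied in linear space (the 2.2 exponent is a common sRGB approximation, not any particular engine's transfer curve):

      # Sketch of the "LDR picker + linear-space multiplier" scheme.
      # Assumptions: 8-bit sRGB picker input, gamma approximated as 2.2.
      def srgb_to_linear(c8):
          return (c8 / 255.0) ** 2.2

      def hdr_light_color(picked_rgb, intensity):
          # picked_rgb: 8-bit-per-channel picker value; intensity: linear multiplier
          return tuple(srgb_to_linear(c) * intensity for c in picked_rgb)

      # A warm light eight times brighter than a full-white LDR pixel:
      print(hdr_light_color((255, 200, 120), 8.0))

    The point of decoding before multiplying is that the multiplier then behaves linearly, which is what an HDR renderer works in; a multiplier applied in gamma space would not scale physically.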

    Read the article

  • Rip authentication from LDAP to Local

    - by oxinabox
    We are taking a small portion of our network offline and running a separate network using that portion. (By small portion I mean 2 servers, which will be connected to 30-odd boxes that aren't usually part of our network and don't need to authenticate.) I intend to create a VM on one of the servers to provide general user services: an IRC server, remote shell, etc. And I would like the users to be able to use their usual server login details. The problem is that the LDAP server that normally checks those details is not one of those servers. So I need to be able to somehow take their details off LDAP and put them on the server that is coming with us. One suggestion I had was to set up an LDAP server on the VM locally and clone the LDAP database onto it (using something called slapcat). Is this the best way? Or can I convert the LDAP data into local authentication data?
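    If the slapcat route is taken, its LDIF output is plain text, so the account details can be pulled out and reshaped into local accounts. A minimal Python sketch of the idea - assuming standard posixAccount attributes, one attribute per line, no base64-encoded values, and ignoring password hashes (those need separate, careful handling); the script name is hypothetical:

      # ldif2passwd.py (hypothetical): read slapcat LDIF on stdin, print
      # /etc/passwd-style lines for entries that carry posixAccount fields.
      # Real LDIF can wrap lines and base64-encode values; the ldif module
      # from python-ldap handles those cases properly.
      import sys

      def flush(e):
          keys = ("uid", "uidNumber", "gidNumber", "cn", "homeDirectory", "loginShell")
          if all(k in e for k in keys):
              print("{uid}:x:{uidNumber}:{gidNumber}:{cn}:{homeDirectory}:{loginShell}".format(**e))

      entry = {}
      for line in sys.stdin:
          line = line.rstrip("\n")
          if not line:                # a blank line ends an LDIF entry
              flush(entry)
              entry = {}
          elif ":" in line:
              k, _, v = line.partition(":")
              entry[k.strip()] = v.strip()
      flush(entry)

    Something like slapcat | python ldif2passwd.py would then print candidate lines to review before anything goes near /etc/passwd.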

    Read the article

  • korgac - init.d kill script on shutdown

    - by Max Magnus
    I'm new to Ubuntu 12.04 and Linux, and my English is not the best, so I'm sorry for incorrect or stupid questions. I've installed KOrganizer, and to start the reminder when I boot the system I added the korgac command to the autostart. This works fine. But now, every time I want to reboot or shut down my system, a message appears telling me that an unknown process is still running... so I have to kill it manually before reboot/shutdown. I know that it is the korgac process that causes this problem, so I decided to create an init.d script. I created a script, put it into init.d, and created 2 symbolic links: to rc0.d and to rc6.d. The names start with K10... (I hope that is correct). K10korgac_kill:
    #!/bin/sh
    pkill korgac
    exit 0
    Unfortunately this wasn't able to resolve my problem. Maybe my script is wrong. I hope someone can help me. Thanks for your time, Max

    Read the article

  • How to add LDAP user to existing local group in RHEL?

    - by Highway of Life
    I'm attempting to add some of our LDAP users to a locally defined group on our RHEL server; however, I get an error stating that the LDAP user is not found in /etc/passwd. What would be the best way to allow LDAP users to be added to local groups? My feeling is that this must be done manually. I could edit /etc/group and add the LDAP users to the group there. Would that be ideal?
    [server]# id apache
    uid=409(apache) gid=409(apache) groups=409(apache) context=user_u:system_r:unconfined_t:s0
    [server]# id john.doe
    uid=11389(john.doe) gid=6097(ABC_Corporate_US) groups=6097(ABC_Corporate_US) context=user_u:system_r:unconfined_t:s0
    [server]# /usr/sbin/usermod -a -G apache john.doe
    usermod: john.doe not found in /etc/passwd
    OS: RHEL (Red Hat Enterprise Linux Server release 5.3 (Tikanga))
    Note: Updating the OS on this machine is not an option.
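    For what it's worth, the manual /etc/group edit contemplated above is mechanical enough to sanity-check with a few lines of code. A minimal sketch (illustration only - the helper is hypothetical, and on a live system vigr, which locks the file, is the safer way to make the edit):

      # What "add the user to the group in /etc/group" amounts to:
      # append a name to the comma-separated member field of a group line.
      def add_member(group_line, user):
          name, pw, gid, members = group_line.rstrip("\n").split(":")
          names = [m for m in members.split(",") if m]
          if user not in names:
              names.append(user)
          return ":".join((name, pw, gid, ",".join(names)))

      print(add_member("apache:x:409:", "john.doe"))   # apache:x:409:john.doe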

    Read the article

  • Free Oracle Special Edition eBooks - Cloud Architecture & Enterprise Cloud

    - by Thanos
    Cloud computing can improve your business agility, lower operating costs, and speed innovation. The key to making it work is the architecture. Learn how to define your architectural requirements and get started on your path to cloud computing with the free Oracle Special Edition e-book, Cloud Architecture for Dummies. Topics covered in this quick reference guide include:
    Cloud architecture principles and guidelines
    Scoping your project and choosing your deployment model
    Moving toward implementation with vertically integrated engineered systems
    Learn how to architect and model your cloud implementation to drive efficiency and leverage economies of scale. For more information, visit oracle.com/cloud and our cloud services at cloud.oracle.com.
    Infrastructure as a Service (IaaS) is critical to the success of many enterprises. Want to build a private cloud infrastructure and cut down IT costs? Learn more about Oracle's highly integrated infrastructure software and hardware to help you architect and deploy a cloud infrastructure that is optimized for the needs of your enterprise from day one. Download the free e-book Enterprise Cloud Infrastructure for Dummies to:
    Realize the benefits of consolidation with the added cloud capabilities
    Simplify deployments and reduce risks with tested and proven guidelines
    Achieve up to 50% lower TCO than comparable multi-vendor alternatives
    Choosing the right infrastructure technologies is essential to capitalizing on the benefits of cloud computing. Oracle Optimized Solution for Enterprise Cloud Infrastructure helps identify the right hardware and software stack and provides configuration guidelines for your cloud. With this book, you come to understand Enterprise Cloud Infrastructure and find out how to jumpstart your IaaS cloud plans. You also discover Oracle Optimized Solutions and learn how integration testing and proven best practices maximize your IT investments. In addition, you see how to architect and deploy your IaaS cloud to drive down costs and improve performance, how to understand and select the right private cloud strategy for you, what the key cloud infrastructure elements are and how to use them to achieve your business goals, and more. For more information, visit oracle.com/oos.

    Read the article

  • Tidbits of goodness - Podcasts, REST, JSON

    - by jeff.x.davies
    I've been quiet for a while, busy with a variety of projects. I did want to let you all know about a couple of things going on. First, I have been participating in architectural podcasts with Bob Rhubart. If you are interested in hearing these short (about 10 minutes each) recordings where a group of us discuss enterprise architecture and its future, check out http://blogs.oracle.com/archbeat/2010/05/podcast_show_notes_evolving_en.html Next, I have been working on the public sample code for the Oracle Service Bus 11g release. I'm now expanding my samples to include SCA, BPEL and the Oracle Adapters. This is really great experience for me because I have been learning these other tools to a deeper level, and that provides insight into developing better solutions. You know the old saying: "If the only tool you have is a hammer, you tend to approach every problem as if it were a nail." However, I'm not the only one working on these samples. We have a lot of our best and brightest working on sample code for the 11g release. Take a look at https://soasamples.samplecode.oracle.com/ to see all of the samples for SOA Suite 11g. A reader wrote to me and asked about using OSB to return information in JSON format. I don't have a sample posted for this yet, but I am working on getting one packaged up. In the meantime I can tell you that it is dead simple to do in OSB. Use the instructions I gave in an earlier blog entry on creating REST services using OSB, specify Messaging Service as the service type that takes a Text message and returns a Text message, then have the OSB proxy service return a JSON-formatted string (by replacing the contents of the $body variable with the JSON text) and you're done! This approach allows you to use OSB services from within JavaScript/AJAX seamlessly. As I get more samples posted to the OTN site, I'll let you know. I have lots of interesting stuff on the way.

    Read the article

  • SEO - Index images (lazyload)

    - by Guilherme Nascimento
    Note: my question is not about JavaScript. I'm developing a plugin for jQuery/MooTools/Prototype that works with the DOM. This plugin will improve page performance (better user experience) and will be distributed to other developers so that they can use it in their projects. How lazy loading works: the images are only loaded when you scroll down the page (it will look like this LazyLoad demo: http://www.appelsiini.net/projects/lazyload/enabled_timeout.html). But it does not need HTML5; I refer to this attribute: data-src="image.jpg". Two good examples of websites using lazy loading are youtube.com (suggested videos) and facebook.com (photo gallery). I believe that the best alternative would be to use <a href="image.jpg">Content for ALT=""</a> and convert it using JavaScript to <img alt="Content for ALT" src="image.jpg">. Then you ask me: why do you want to do that anyway? I'll tell you: because HTML5 is not supported by every browser (especially mobile), and the data-src="image.jpg" attribute does not work with indexers at all. I need the HTML code to be fully accessible to search engines; otherwise the plugin will not be something good for other developers. I thought about doing this to help with indexing: <noscript><img src="teste.jpg"></noscript> But noscript has a negative effect on indexing (I refer to the contents of noscript). I want a plugin that will not obstruct image indexing in search engines. This plugin will be used by other developers (and me too). This is my question: how do I make the HTML for images accessible to search engines, while minimizing requests?

    Read the article

  • PowerShell overruling Perl binmode?

    - by hippietrail
    I have a Perl script which creates a binary file while scanning a very large text file. It outputs to STDOUT, which I redirect on the command line to a file. To optimize it I'm making changes and then seeing how long it takes to run. On Linux I use the "time" command for this. On Windows the best way to time a program seemed to be PowerShell's "measure-command". This seemed to work fine, but I noticed the generated files were larger. On examination I found that the files generated from within PowerShell begin with a BOM and contain CRLF pairs! My Perl script has a "binmode STDOUT" directive and does work correctly in a normal dosbox. Is this a bug or misfeature in PowerShell or measure-command? Has it affected others creating binary files by means other than Perl? Googling hasn't turned anything up so far. I'm using Perl 5.12, PowerShell v1.0 and Windows XP.
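    For what it's worth, PowerShell's > redirection runs program output through its text pipeline, and in these versions that means Unicode output with a BOM and Windows line endings - consistent with the symptoms described. A minimal Python sketch to confirm what happened to a given file (the filename is a placeholder):

      # Check a supposedly binary file for classic text-pipeline damage:
      # a BOM at the start and CR/LF pairs sprinkled through the data.
      data = open("output.bin", "rb").read()

      print("UTF-16LE BOM:", data[:2] == b"\xff\xfe")
      print("UTF-8 BOM:   ", data[:3] == b"\xef\xbb\xbf")
      print("CRLF pairs:  ", data.count(b"\r\n"))

    One usual workaround is to have the Perl script open and write the output file itself (with binmode on that filehandle), bypassing PowerShell's redirection entirely.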

    Read the article

  • Life and Career guidance

    - by Andrei TheGiant Haxtor
    Hello programmers. I have a dilemma I'm pondering over. I will be graduating from high school with ~60 credits' worth of community college work (pre-engineering courses), and I am wondering what experienced programmers would suggest I do with my time, since I have all of the bull courses out of the way. Should I start taking computer science/engineering courses, or should I take some other courses that interest me (psych, math)? The reason I am asking is that I like doing a lot of self-studying, especially relating to software and tech. I don't like to have the pressure of hard classes on me, so I could make up for the time lost doing the CC courses and dive deep into programming and books. Unfortunately I've only started getting into programming recently, since I didn't have much time because of my course load. Right now I am doing Java and messing around with Android. I would like to get involved in web and mobile development, operating systems, and finance software. If any of you experienced people could please give me some guidance and words of wisdom, I would greatly appreciate it. Sorry that this isn't necessarily related to programming. All the best.

    Read the article

  • Advice for Setting up an On-Call Team

    - by Ciaran Archer
    I'm leading a largish development team (~35 developers). We are doing primarily Web Development work on a number of sites. Historically the knowledge on the teams has been pretty siloed. If you worked on Site A you will know how to troubleshoot it, but you would not be a lot of help on Site B. We also have a few cross-cutting concerns, i.e. common components used between sites which require specialized knowledge to troubleshoot. With all this in mind, I'm trying to understand the best way to setup an on-call team. This would be a team of programmers who would be available to deal with out-of-hours emergency issues occasionally (say one call every 2 weeks). They may be required to deploy emergency fixes. Part of me is saying we can't have a big on-call team with shallow knowledge, instead we need a smaller team with deep knowledge who can expect to be on-call more often and remunerated as such. Does anyone have any suggestions based on experience on how to setup this team? Thanks in advance.

    Read the article

  • Combine Multiple Audio Files into a Single Higher-Quality Audio File

    - by namenlos
    BACKGROUND
    My team gave a demo to a large audience - we recorded the audio of the demo in multiple locations in the room (3). The audio was recorded using cheap laptop microphones; I was not involved in the recording of the audio or the demo. Both audio files suck in some form. The first one is a recording near the speaker - it clearly gets his voice, but the audience is muffled - and it is also slightly noisy. The second recording was done in the middle of the audience - it gets the audience's questions clearly, but picks up the speaker sometimes well and sometimes poorly (not all the speakers spoke loudly enough to be heard).
    MY QUESTION
    Is there any technique or software which can be used to merge these audio files in such a way that the best qualities of each are preserved? I am NOT asking how to simply merge them together into one track - I've already done that in Audacity and it is certainly better. What I am looking for could be considered closer to how HDR images are created - multiple exposures combined into an enhanced new version which is not simply an average of the inputs.
    NOTE
    I am not an "audio" guy - just a normal user.
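    Not an HDR-for-audio algorithm, but one crude approximation of the idea is a block-wise mix weighted toward whichever recording is stronger at each moment. A minimal Python sketch, assuming two already-aligned 16-bit mono WAV files at the same sample rate (filenames are placeholders; real material would also need alignment and noise reduction, and this is no substitute for proper mixing tools):

      # Block-wise "best exposure" mix of two aligned mono 16-bit WAVs.
      # Assumes identical sample rates and synchronized starts.
      import wave
      import numpy as np

      def read_wav(path):
          with wave.open(path, "rb") as w:
              frames = w.readframes(w.getnframes())
              return np.frombuffer(frames, dtype=np.int16).astype(np.float32), w.getframerate()

      a, rate = read_wav("near_speaker.wav")
      b, _ = read_wav("mid_audience.wav")
      n = min(len(a), len(b))
      a, b = a[:n], b[:n]

      out = np.empty(n, dtype=np.float32)
      block = rate // 2                          # half-second blocks
      for i in range(0, n, block):
          xa, xb = a[i:i + block], b[i:i + block]
          ra = np.sqrt(np.mean(xa ** 2)) + 1e-6  # RMS loudness per block
          rb = np.sqrt(np.mean(xb ** 2)) + 1e-6
          wa = ra / (ra + rb)                    # weight toward the stronger source
          out[i:i + block] = wa * xa + (1 - wa) * xb

      out = np.clip(out, -32768, 32767).astype(np.int16)
      with wave.open("mixed.wav", "wb") as w:
          w.setnchannels(1)
          w.setsampwidth(2)
          w.setframerate(rate)
          w.writeframes(out.tobytes())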

    Read the article

  • How to keep Word, HTML and PDF documentation aligned

    - by dendini
    Is there a way to write documentation in a WYSIWYG editor which can then be exported to HTML, Word and PDF, keeping the copies synchronized? This documentation consists mostly of technical notes and some contextual help for some software, so it must contain images and some styling; it is not programmer's documentation (API or function lists), for which a program like Javadoc or Doxygen would probably be the best choice. For example, how do companies with hundreds of different software lines and thousands of programmers deal with this? I have several solutions, but they all seem lacking in some aspect:
    LaTeX/TeX: very good PDF and HTML export; not very user-friendly, and no full-blown WYSIWYG editor available.
    LibreOffice/OpenOffice: full-blown WYSIWYG editor; however, the HTML export is not so good (you need to hand-edit the exported HTML, which then has to be maintained separately).
    MediaWiki or any other wiki: you could keep the documentation in wikitext format, so HTML is generated automatically, and PDF export is quite good with the many available plugins. Again, however, the staff needs some training to use it, and you need to set up a server for it.
    Notice I'm not asking for software A vs. software B; I'm asking for general advice and big companies' documentation procedures - and yes, some software product names if available.
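    One option the list above doesn't cover is keeping a single master file and generating the other formats from it with pandoc. A minimal sketch using the pypandoc wrapper - assuming a recent pypandoc, pandoc itself on the PATH, a LaTeX engine for the PDF step, and "manual.docx" as a placeholder master document:

      # Generate HTML and PDF from one WYSIWYG-authored master file.
      # pypandoc.convert_file wraps the pandoc CLI; PDF output needs LaTeX.
      import pypandoc

      master = "manual.docx"   # authored in Word/LibreOffice
      pypandoc.convert_file(master, "html", outputfile="manual.html")
      pypandoc.convert_file(master, "pdf", outputfile="manual.pdf")

    The trade-off is the usual one: a single source keeps the copies aligned by construction, at the cost of giving up format-specific hand-tuning.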

    Read the article

  • How to Share Links Between Any Browser and Any Smartphone

    - by Justin Garrison
    It happens all the time, you find an article to read but then nature calls. Do you take your laptop with you? With site to phone you can share links between any browser and any smartphone with a single click. If you have Android you may be familiar with this functionality with Google’s Chrome to phone, or with webOS’ Neato! But what if you have an iPhone, Blackberry or Windows Phone 7 device? That is where site to phone comes in handy. It not only supports every major mobile smartphone operating system, but it also supports every major web browser.

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
    Well, it behaves as if it were doing a recompile, and an explicit recompile adds no value at all. (We just go up to 45,000 rows, since we know the bigger picture now.) Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the table variable and the temporary table. The table variable is faster up to 8,000 rows, and then there is not much in it up to 100,000 rows. Past the 8,000-row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that OPTION (RECOMPILE) hint.
    Count Dracula and the Horror Join
    These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’, i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey:
    If both tables contain a primary key on the columns we join on, and both are Table Variables, it took 33 ms.
    If one table contains a primary key, the other is a heap, and both are Table Variables, it took 46 ms.
    If both Table Variables use a unique constraint, the query takes 36 ms.
    If neither table contains a primary key and both are Table Variables, it took 116,383 ms. Yes, nearly two minutes!!
    If both tables contain a primary key, one is a Table Variable and the other is a temporary table, it took 113 ms.
    If one table contains a primary key, and both are temporary tables, it took 56 ms.
    If both tables are temporary tables and both have primary keys, it took 46 ms.
    Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use OPTION (RECOMPILE)? If you take the rogue query and add the hint, then suddenly the query drops its time down to 76 ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans. So where have we got to? Without drilling down into the minutiae of the execution plans, we can begin to create a hypothesis.
    If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either have a primary key on the column you are using to join on, or else use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble - well, slow queries - unless you give it the hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context).
    Kevin’s Skew
    In describing the table size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100,000 rows in which the key column was one number (1). To this were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2, it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows, it adopted exactly the same execution plan as for the temporary table, and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, the plans were identical across a huge range.
    Conclusions
    Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in TempDB when they are no longer required; they require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘heaps’, unindexed. Beware, however: as the number of rows increases, joins on Table Variable heaps can easily become saddled with very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables.
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.

    Read the article

  • Segfault with rtorrent on Debian Lenny

    - by digital
    Hi, my Debian Lenny server keeps segfaulting with rtorrent; it happens once every 24 hours. libcurl has been recompiled to the latest version and it still seems to happen. I'm not the best when it comes to Linux server administration, but if you require more info about the system I'll try and get it for you. lib/rtorrent are 0.8.5/0.12.5. Any help would be appreciated, as I'd like rtorrent up 24/7.
    Caught Segmentation fault, dumping stack:
    0 rtorrent [0x439686]
    1 rtorrent [0x43e06a]
    2 /lib/libc.so.6 [0x7f73ce780f60]
    3 /usr/lib/libcurl.so.4 [0x7f73d04f4431]
    4 /usr/lib/libcurl.so.4 [0x7f73d04f47da]
    5 /usr/lib/libcurl.so.4(curl_multi_remove_handle+0x341) [0x7f73d050acb1]
    6 rtorrent [0x480221]
    7 rtorrent [0x482915]
    8 /usr/local/lib/libtorrent.so.11 [0x7f73d02b1f95]
    9 /usr/local/lib/libtorrent.so.11 [0x7f73d02b1fea]
    10 /usr/local/lib/libtorrent.so.11 [0x7f73d02b4cfc]
    11 rtorrent [0x48058a]
    12 rtorrent [0x439f49]
    13 /lib/libc.so.6(__libc_start_main+0xe6) [0x7f73ce76d1a6]
    14 rtorrent(_ZNSt8ios_base4InitD1Ev+0x71) [0x40ea99]

    Read the article

  • Parameters in an SEO URL

    - by Marius
    This should be a very simple question for SEO experts. Let's say we have the following URLs:
    http://www.test.com/some-sort-of-page
    http://www.test.com/some-sort-of-page?pgid1189
    http://www.test.com/some-sort-of-page/page/1189
    http://www.test.com/page/1189/some-sort-of-page
    The first one is an ideal solution. What I need to do is to somehow pass a resource identifier in the URL so I know exactly what the URL is pointing to, since it can point to a lot of different things. In the second URL, "pgid" specifies that the resource is a "page". URLs 3 and 4 specify the same thing differently. I do not care whether the URL is friendly to people, because, let's face it, 99.9% of people will never, ever bother to remember such a URL no matter how "friendly" it is. So the question is: which of the last three URLs would be the best solution for search engines? My guess is that it would be the second, with the query string, but I might be wrong. Thanks for your thoughts. P.S. Please don't suggest using the first URL. There's no problem using it, but the question is not about that.

    Read the article

  • When did Red Hat start shipping PHP 5.3 with 5.x?!?

    - by Jason
    Okay, this is a PSA more than a question, because I know the answer: January 13, 2011. See: https://rhn.redhat.com/errata/RHEA-2011-0069.html Colour me surprised though - I didn't hear anything about it in the blogosphere until I got a Security Errata notice today. I have been using the REMI repo for this in the past, but will switch over to the Red Hat-blessed PHP 5.3. Don't down-vote me bro! I'll select as the best answer the source that broke the news first (other than Red Hat, of course). People have wanted this for so long; I'm just amazed that it's finally happened!

    Read the article

  • How Do I Print Photos?

    - by Takkat
    Other than on Windows, in Ubuntu there are no fancy utilities provided by printer manufacturers to print photos. I am aware of Gnome Photo Printer and of PhotoPrint, the first being easy to handle, the latter having more options. However, I wonder if there are any other, or maybe even better, alternatives (including plugins) to perform the following tasks:
    Print photos in the best photo resolution the driver offers
    Adjust paper size for standard values of photo papers
    Choose the paper tray if the printer has more than one
    Print multiple photos on one page, including mixed sizes (grids)
    Multiple prints with the same settings
    Borderless printing if the printer is capable of it
    Any additional options like pre-processing for color correction or noise reduction would be nice to have but are not so essential.
    Update: According to this spec it seems not so easy to accomplish the simple task of printing photos. Indeed, all the applications I have gone through have major drawbacks that make printing photos almost impossible. Below I list what put me off using them for photo printing:
    Gnome Photo Printer: no thumbnails, no grids
    PhotoPrint: does not keep settings, GUI broken, no standard photo sizes, no thumbs
    Eye of GNOME: no multiple pages, no grids
    Gimp + Images Grid Layout: far too many steps, only to find that prints are always different from their previews
    F-Spot: no grids
    Picasa 3: no grids, very few fixed paper sizes, 300 dpi only
    flPhoto: strange GUI, no thumbs, no printer settings, did not print at all
    Windows: Ooops - everything works fine! But I want Ubuntu to do this!
    After half a pack of ink cartridges and half a pack of photo paper cards I am getting tired of testing. At least Gimp and Picasa looked promising, but both don't keep their promise when it comes to printing. I'd already be happy to quickly print a few photos with EOG if bug #80220 was fixed - but it's still on "wishlist".

    Read the article

  • Why doesn't Unity's OnCollisionEnter give me surface normals, and what's the most reliable way to get them?

    - by michael.bartnett
    Unity's on-collision event gives you a Collision object that gives you some information about the collision that happened (including a list of ContactPoints with hit normals). But what you don't get is surface normals for the collider that you hit. Here's a screenshot to illustrate. The red line is from ContactPoint.normal and the blue line is from RaycastHit.normal. Is this an instance of Unity hiding information to provide a simplified API? Or do standard 3D realtime collision detection techniques just not collect this information? And for the second part of the question, what's a surefire and relatively efficient way to get a surface normal for a collision? I know that raycasting gives you surface normals, but it seems I need to do several raycasts to accomplish this for all scenarios (maybe a contact point/normal combination misses the collider on the first cast, or maybe you need to do some average of all the contact points' normals to get the best result). My current method:
    1. Back up Collision.contacts[0].point along its hit normal.
    2. Raycast down the negated hit normal for float.MaxValue, on Collision.collider.
    3. If that fails, repeat steps 1 and 2 with the non-negated normal.
    4. If that fails, try steps 1 to 3 with Collision.contacts[1].
    5. Repeat step 4 until successful or until all contact points are exhausted.
    6. Give up; return Vector3.zero.
    This seems to catch everything, but all those raycasts make me queasy, and I'm not sure how to test that this works for enough cases. Is there a better way?

    Read the article

  • eSTEP TechCast - December 2011 - Solaris 11 - The First Cloud OS

    - by uwes
    Dear partner, we are pleased to announce our next eSTEP TechCast on Thursday, 1st December, and would be happy if you could join. Please see below the details for the next TechCast.
    Date and time: Thursday, 01 December 2011, 11:00 - 12:00 GMT (12:00 - 13:00 CET)
    Abstract: Solaris 11 contains many new features, particularly around improved virtualisation and network performance. Additionally, new software packaging for fool-proof upgrades, higher availability and reduced maintenance windows replaces the former SVR4 packaging and upgrade/patching methods.
    Target audience: Tech Presales
    Speaker: Andrew Gabriel
    Call info:
    Call-in toll-free number: 08006948154 (United Kingdom)
    Call-in toll-free number: +44-2081181001 (United Kingdom)
    Conference Code: 803 594 3
    Security Passcode: 9876
    Webex info (Oracle Web Conference):
    Meeting Number: 597 686 322
    Meeting Password: tech2011
    Playback / Recording / Archive: The webcasts will be recorded and will be available shortly after the event in the eSTEP portal under the Events tab, where you can also find material from previously delivered eSTEP TechCasts. Use your email address and PIN eSTEP_2011 to get access. Feel free to have a look. We are happy to get your comments and feedback. Thanks and best regards, Partner HW Enablement EMEA

    Read the article

  • Android Software for the SysAdmin on the move.

    - by GruffTech
    So my company has our service through Verizon, and AT&T service in the area is "shoddy" at best, so I haven't been able to join the "iPhone party" like so many of my fellow sysadmins have. That being said, this week a phone I like has finally hit Verizon: the HTC Incredible. (I've been waiting for the Desire or Nexus One, but after seeing spec sheets and reviews, the HTC Incredible comes out ahead anyway.) So (finally) I'm looking for Android apps that are "gotta-haves" for system admins. I've found the three below; if there are others you prefer over these, let me know.
    RDP program - RemoteRDP
    SSH client - ConnectBot
    Nagios - NagMonDroid
    Reply with your favorite Android app and why!

    Read the article

  • How to correct time on Windows PDC server without affecting logons

    - by Kieran Walsh
    I know how to set an authoritative time server in Server 2008 R2; that's not what this question is about. I want to know how I can change the time on a network where the PDC (and therefore everything) is a month out of date. I know that a five-minute difference in time between clients and the domain prevents logons, so just changing the time on the PDC will break everything. What is the best way to fix this? Thanks, Kieran.

    Read the article

  • Oracle VM 3: New Patch Set! (or Mega Millions winner?...you decide!..)

    - by Adam Hawley
    Today, my favorite number is 14736185 (despite the fact that it did not win me $249 million in the MegaMillions lottery... or did it?)! Why? Because it is our latest patch release, and it is chock-full of good stuff for the Oracle VM 3.0 user. Oracle VM support customers can find it on My Oracle Support as patch number 14736185. This can be installed on Oracle VM 3.0.x systems as an incremental patch on top of 3.0.3, so if you previously ran 3.0.3 GA or updated to 3.0.3 patch 1 (build 150), this will just apply on top. We're recommending you update to this patch set at your earliest convenience. For more details, see below, but also see Wim Coekaert's blog for related info.
    Oracle VM Manager Update Instructions
    Oracle VM Manager 3.0.2 or 3.0.3 can be upgraded to this Oracle VM Manager 3.0.3 patch update. Unzip the patch file on the server running Oracle VM Manager and execute the runUpgrader.sh script:
    # ./runUpgrader.sh
    Please refer to the Oracle VM Installation and Upgrade Guide for details.
    Upgrade Oracle VM Servers
    It's highly recommended to update Oracle VM Server 3.0.3 with the latest patch update. Please review the Oracle VM 3.0.3 User Guide (http://docs.oracle.com/cd/E26996_01/e18549/BABDDEGC.html) for specific instructions on how to use the Yum repository to perform the server update. To receive notification of software updates delivered to the Oracle Unbreakable Linux Network (ULN, http://linux.oracle.com) for Oracle VM, you can sign up at http://oss.oracle.com/mailman/listinfo/oraclevm-errata.
    Additional Information
    Oracle VM documentation is available on the Oracle Technology Network (OTN): http://www.oracle.com/technetwork/server-storage/vm/documentation/index.html
    Please refer to the Oracle VM 3.0.3 Release Notes for a list of features and known issues. For the latest information, best practices white papers and webinars, please visit http://oracle.com/virtualization

    Read the article

  • auto-summarization: classful vs classless routing protocols

    - by yorble
    Suppose a router R1 is directly connected to the following subnets:
    10.1.0.0/24
    10.1.1.0/24
    10.1.2.0/24
    10.1.3.0/24
    If it is running RIPv1, it will advertise "I have the network 10.0.0.0" (implicitly understood by receiving RIPv1 routers as 10.0.0.0/8, because the protocol is classful). But suppose we changed the routing protocol to RIPv2 and turned ON auto-summarization. Would it behave in the same way? Would it advertise "I have the network 10.0.0.0" (advertised WITHOUT a subnet mask, and implicitly understood by other routers as 10.0.0.0/8)? OR would it auto-summarize in a non-classful way, like "I have 10.1.0.0/22" (advertised as a network ID and subnet mask pair)? In other words, does turning on auto-summarization in RIPv2 (or other classless routing protocols) cause it to auto-summarize in a classful manner, or simply auto-summarize classlessly to the best of its ability?
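    Whatever the protocol does, the arithmetic behind the two candidate advertisements can be checked with Python's standard ipaddress module; a minimal sketch contrasting the classful /8 summary with the tightest classless summary of the four connected subnets:

      # Compare the classful summary implied by a maskless advertisement
      # with the smallest classless aggregate of the connected prefixes.
      import ipaddress

      subnets = [ipaddress.ip_network(n) for n in
                 ("10.1.0.0/24", "10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24")]

      # Classful: a 10.x.x.x network advertised without a mask is taken as /8.
      print(subnets[0].supernet(new_prefix=8))             # 10.0.0.0/8

      # Classless: merge contiguous prefixes into the smallest covering set.
      print(list(ipaddress.collapse_addresses(subnets)))   # [10.1.0.0/22]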

    Read the article
