Search Results

Search found 21061 results on 843 pages for 'bulid process'.


  • Making The EBS Upgrade From 11.5.10 Easier - Part III

    - by Annemarie Provisero
    ADVISOR WEBCAST: Making The EBS Upgrade From 11.5.10 Easier - Part III
    PRODUCT FAMILY: E-Business Suite
    July 19, 2011 at 8 am PT, 9 am MT, 11 am ET

    This one-hour session is recommended for technical users who are responsible for upgrading their E-Business Suite applications from Release 11.5.10 to Release 12.1.x. As you begin your upgrade process, there are a number of tools available to assist you in a successful upgrade. A successful upgrade requires careful planning, correct upgrade processing, detailed testing, and user (re)training prior to upgrade. Over three sessions we will discuss the tools that you can use to assist in your upgrade tasks. These tools are available to you via My Oracle Support and as part of the E-Business Suite product offerings. In this third session, we’ll cover the Best Practices for Using The Upgrade Tools. Additionally, this session includes an extended question and answer period.

    In the first part of the three-session series, we covered the following topics:
      • Overview of Tools Available for Upgrading
      • Upgrade versus Re-implementing
      • Upgrade Community
      • Upgrade Product Information Center Page
      • Detailed Look at Upgrade Advisor

    In the second session, we covered the following topics:
      • Recap of Part I
      • Detailed Look at Maintenance Wizard
      • Detailed Look at Patch Wizard

    A replay of those sessions is available via Note 740964.1, Advisor Webcast Archive. A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • What is the impact of Windows 8 with UEFI on normal users?

    - by Sam
    I am a normal man-in-the-street computer user and so do not really understand what this is about, but I want to. Can someone please explain to me if: The Windows 8/UEFI secure boot thing will make it impossible to run normal/legacy applications in Windows 8 (as they will be unsigned)? It will turn Windows into an Apple-like system where only Microsoft approved applications can be run? As I say, I'm a normal user, and that is the overall impression I have from reading all the blogs, etc about it. If, on the other hand, all it does is make sure the system is booting a signed OS, how does this prevent malware (which is what at least two Microsoft blogs that I read seemed to be saying), given that most malware is not part of the boot process? The only way I can see this making sense is if it is ensuring that all OS components are signed. Is that it? Like I say, I'm a mortal, so please don't get technical on me, but rather explain how it will affect me, the user.

    Read the article

  • Should I Use PHP as FastCGI?

    - by Synetech inc.
    Hi, I am running an Apache webserver on my Windows machine. It is not generally a public server (most of the little bit of traffic comes from the machine itself, and most of the public traffic comes from crawlers). Basically, it is mostly just for use as a test-bed/development system. I have read about how running PHP as FastCGI is better (i.e. faster and more stable) than as an Apache module. However, I really don’t like the idea of multiple php.exe processes (I don’t like that Apache has two processes, and I’m not even too thrilled with Chromium’s multi-process model). So I’m wondering if it would be worthwhile to change PHP to FastCGI for this scenario. If it is, how would I configure it? Pretty much all of the information I have seen has been either for non-Windows or for IIS. As I said, I’m running Windows+Apache. Thanks a lot.
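    For what it's worth, a minimal mod_fcgid setup for Apache on Windows might look roughly like the sketch below. The paths and the process cap are assumptions for illustration, not something from this post, so check them against your own install and the mod_fcgid documentation:

      # httpd.conf -- paths are hypothetical, adjust to your PHP/Apache install
      LoadModule fcgid_module modules/mod_fcgid.so

      <IfModule fcgid_module>
          AddHandler fcgid-script .php
          # Hand .php requests to the PHP CGI binary (assumed location)
          FcgidWrapper "C:/php/php-cgi.exe" .php
          # Limit how many php-cgi.exe processes mod_fcgid may spawn
          FcgidMaxProcesses 4
      </IfModule>

      <Directory "C:/Apache/htdocs">
          Options +ExecCGI
      </Directory>

    Whether the switch is worth it for a mostly local test bed is debatable; the usual argument for FastCGI is process isolation and stability under load rather than raw speed on a single-user machine.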

    Read the article

  • ArrayIndexOutOfBounds exception in CoyoteAdapter.normalize()

    - by Alex
    I'm working with an application that uses Tomcat 5.0.28 for sending and receiving AS2 messages. At times, it's throwing the following exception on receiving an MDN receipt for a transmission:

      An exception or error occurred in the container during the request processing
      java.lang.ArrayIndexOutOfBoundsException: 0
          at org.apache.coyote.tomcat5.CoyoteAdapter.normalize(CoyoteAdapter.java:483)
          at org.apache.coyote.tomcat5.CoyoteAdapter.postParseRequest(CoyoteAdapter.java:239)
          at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:158)
          at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:799)
          at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:705)
          at org.apache.tomcat.util.net.TcpWorkerThread.runIt(PoolTcpEndpoint.java:577)
          at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
          at java.lang.Thread.run(Unknown Source)

    I've found a report of this issue regarding v. 5.0.25 (here), with a followup note that it was resolved in 5.0.27. However, as above, the version number used in this app is 5.0.28. Any suggestions for how to find out what might be triggering this error?

    Read the article

  • Handling early/late/dropped packets for interpolation in a 3D multiplayer game

    - by Ben Cracknell
    I'm working on a multiplayer game that, for the purposes of this question, is most similar to Team Fortress. Each network data packet will contain the 3D position of the target moving object (this object could be another player). The packets are sent on a fixed interval, and linear interpolation will be used to smooth the transition between packets. Under normal circumstances, interpolation will occur between the second-to-last packet and the last packet received. The linear interpolation algorithm is the same as in this post: Interpolating positions in a multiplayer game. I have the same issue as in that post, but the answers don't seem like they will work in my situation. Consider the following scenario:
      • Normal packet timing, everything is okay.
      • The next expected packet is late. That's okay, we'll just extrapolate based on previous positions.
      • The late packet eventually arrives with corrections to our extrapolation. Now what do we do with its information?
    The answers on the above post suggest we should just interpolate to this new packet's position, but that would not work at all. If we have already extrapolated past that point in time, moving back would cause rubber-banding. The issue is similar in the case of an early or dropped packet. So I believe what I am looking for is some way to smoothly deal with new information in an ongoing interpolation/extrapolation process. Since I might be moving on to quadratic or even cubic interpolation, it would be great if the same solution could be applied to those as well.
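    One approach worth sketching (a general technique, not something spelled out in the linked post): buffer timestamped snapshots and render the remote object a fixed interpolation delay behind the newest data, so a late packet usually still arrives before its timestamp is needed and simply slots into the buffer. A rough TypeScript illustration, with made-up names and a made-up delay value:

      // Render remote entities INTERP_DELAY ms in the past; late packets then
      // normally arrive before their timestamp is actually sampled.
      interface Snapshot { time: number; pos: [number, number, number]; }

      const INTERP_DELAY = 100;        // roughly 2x the send interval (assumption)
      const buffer: Snapshot[] = [];   // kept sorted by timestamp

      function onPacket(snap: Snapshot): void {
        // Insert in timestamp order; a late packet slots into the middle
        // instead of being treated as the newest state.
        let i = buffer.length;
        while (i > 0 && buffer[i - 1].time > snap.time) i--;
        buffer.splice(i, 0, snap);
      }

      function sample(now: number): [number, number, number] {
        const t = now - INTERP_DELAY;
        // Find the two snapshots straddling t and lerp between them.
        for (let i = buffer.length - 1; i > 0; i--) {
          const a = buffer[i - 1], b = buffer[i];
          if (a.time <= t && t <= b.time) {
            const k = (t - a.time) / (b.time - a.time);
            return [
              a.pos[0] + (b.pos[0] - a.pos[0]) * k,
              a.pos[1] + (b.pos[1] - a.pos[1]) * k,
              a.pos[2] + (b.pos[2] - a.pos[2]) * k,
            ];
          }
        }
        // Nothing new enough yet: hold the last known position (or extrapolate here).
        return buffer.length ? buffer[buffer.length - 1].pos : [0, 0, 0];
      }

    The same buffering works for quadratic or cubic interpolation; only the sampling step changes.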

    Read the article

  • How do I create my own programming language and a compiler for it

    - by Dave
    I am comfortable with programming and have come across languages including BASIC, FORTRAN, COBOL, LISP, LOGO, Java, C++, C, MATLAB, Mathematica, Python, Ruby, Perl, JavaScript, Assembly and so on. I can't understand how people create programming languages and devise compilers for them. I also couldn't understand how people create operating systems like Windows, Mac, UNIX, DOS and so on. The other thing that is mysterious to me is how people create libraries like OpenGL, OpenCL, OpenCV, Cocoa, MFC and so on. The last thing I am unable to figure out is how scientists devise an assembly language and an assembler for a microprocessor. I would really like to learn all of this stuff, and I am 15 years old. I always wanted to be a computer scientist, someone like Babbage, Turing, Shannon, or Dennis Ritchie. I have already read Aho's Compiler Design and Tanenbaum's OS concepts book, and they only discuss concepts and code at a high level. They don't go into the details and nuances of how to devise a compiler or operating system. I want a concrete understanding so that I can create one myself, not just an understanding of what a thread, semaphore, process, or parsing is. I asked my brother about all this. He is an SB student in EECS at MIT and hasn't got a clue how to actually create any of this stuff in the real world. All he has is an understanding of Compiler Design and OS concepts like the ones that you guys have mentioned (i.e. Thread, Synchronization, Concurrency, memory management, Lexical Analysis, Intermediate code generation and so on).

    Read the article

  • Interpreting Munin graphs showing available entropy and MySQL slow queries in sync

    - by user64204
    We're experiencing performance issues on our website, and after reviewing our Munin graphs, the only metrics we've found in sync are available entropy and MySQL slow queries, with the latter influenced by our number of logged-in users. Based on the Wikipedia entropy page, my understanding is that entropy is the amount of randomness (here measured in bytes) that the system can use for various tasks, mainly cryptography and functions that require random input. The peaks in available entropy and MySQL slow queries occur in sync and at regular intervals, and the number of MySQL slow queries is proportional to our number of Drupal users, whereas the peaks in available entropy are much more constant and less proportional to those two metrics. We're therefore thinking that available entropy reflects a root cause which, combined with the traffic to our website, is causing those slow queries (and not the opposite, slow queries influencing the entropy). Accordingly: Q: What underlying problem do you think could cause regular peaks in available entropy that could have an influence on MySQL's ability to process queries?
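    One thing that might be worth ruling out (an assumption, not a diagnosis): anything in the stack that reads from the blocking /dev/random pool (some crypto libraries and session/token generators do) will stall whenever the pool runs low, and those stalls can surface as slow queries. The pool level can be watched alongside the Munin graphs with:

      watch -n1 cat /proc/sys/kernel/random/entropy_avail

    If the stalls line up with the pool draining, switching the offending code to /dev/urandom or feeding the pool from a hardware or daemon-based entropy source would be the direction to investigate.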

    Read the article

  • local install of wp site brought down from host - home page is ok but other pages redirect to wamp config page

    - by jeff
    Local install of a WP site brought down from the host - the home page is OK but other pages redirect to the WAMP config page. I copied all the files from the host to the www dir under my local WAMP install. I exported the database from the host, loaded it into a new local DB, and used this tool to adjust site_on_web.com to "localhost/site_on_local". Now the home page works great and I can log in to the admin page, but when I click on the Reservations page (and others), the site just goes to the WAMP server config page, even though the URL shows correctly as localhost/site_on_local/reservations. My .htaccess file is this:

      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress

    The rewrite module is checked in the WAMP PHP/Apache modules setting. When I uncheck the rewrite module, or clear out the whole .htaccess file, the pages just go to:

      Not Found
      The requested URL /ritas041214/about-ritas/ was not found on this server.

    Please help, as I am now unsure about my process for moving the site between local and host and making it work, and without this I am lost...
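    For comparison, the stock WordPress rules for a site served from a subdirectory normally point RewriteBase and the final RewriteRule at the subdirectory rather than /. A sketch using the folder name from this post (worth checking against the WordPress documentation for your version):

      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /site_on_local/
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /site_on_local/index.php [L]
      </IfModule>
      # END WordPress

    With RewriteBase left at /, requests that fall through to index.php resolve against the WAMP document root, which would explain landing on the server's default page.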

    Read the article

  • yum not able to install a package

    - by shadyabhi
    [root@mypc yum.repos.d]# yum search perl-Locale-gettext
    Loaded plugins: dellsysid, fastestmirror
    Repository tmz-puppet is listed more than once in the configuration
    Loading mirror speeds from cached hostfile
     * atomic: www6.atomicorp.com
     * base: mirror.trouble-free.net
     * epel: mirrors.tummy.com
     * extras: eq-centosrepo.hopto.org
     * rpmforge: mirror.hmc.edu
     * updates: mirror.team-cymru.org
    ======================== N/S Matched: perl-Locale-gettext ========================
    perl-Locale-gettext.x86_64 : Internationalization for Perl
      Name and summary matches only, use "search all" for everything.
    [root@mypc yum.repos.d]#

    And:

    [root@mypc yum.repos.d]# yum install perl-Locale-gettext
    Loaded plugins: dellsysid, fastestmirror
    Repository tmz-puppet is listed more than once in the configuration
    Loading mirror speeds from cached hostfile
     * atomic: mir01.syntis.net
     * base: mirrors.gigenet.com
     * epel: mirror.us.leaseweb.net
     * extras: centos.mirror.lstn.net
     * rpmforge: mirror.hmc.edu
     * updates: centos.mirror.choopa.net
    Setting up Install Process
    Nothing to do
    [root@mypc yum.repos.d]#

    What is going wrong here?
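    "Nothing to do" from yum install often just means the package is already present (or excluded); a quick check, offered only as a suggestion rather than something from the original thread, would be:

      rpm -q perl-Locale-gettext
      yum list installed perl-Locale-gettext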

    Read the article

  • Can we increase Torrent share ratio using Local Peer Discovery?

    - by Jagira
    I just want to know whether this is a flaw in the BitTorrent system or not. Let us assume that I am a member of a private torrent site which requires me to maintain a specific upload-to-download ratio. Will this work?
      • I create a torrent of a large file, say [ Fedora Linux ~ 4 GB ], and upload it to the tracker.
      • I download the same torrent using my ID and start it on another machine on the LAN or a virtual machine.
      • Both clients have Local Peer Discovery enabled, so they will find 'em [ not via DHT ] and start x'ferring data using LAN bandwidth at LAN speeds.
      • Though both uploads and downloads will increase, my ratio will also increase.
      • If I reiterate the entire process 'n' times, the numerator in the "RATIO", i.e. upload, will become so large that the effect of downloads on the ratio will become less.
    I want to know whether this is legitimate?

    Read the article

  • Split a 2D scene in layers or have a z coordinate

    - by Bane
    I am in the process of writing a 2D game engine, and a dilemma emerged. Let me explain the situation... I have a Scene class, to which various objects can be added (Drawable, ParticleEmitter, Light2D, etc), and as this is a 2D scene, things will obviously be drawn over each other. My first thought was that I could have basic add and remove methods, but I soon realized that then there would be no way for the programmer to control the order in which things were drawn. So I came up with two options, each with its pros and cons. A) Would be to split the scene in layers. By that I mean instead of having the scene be a container of objects, have it be a container of layers, which are in turn the containers of objects. B) Would require some kind of z-coordinate, and then have the scene sorted so objects with lower z get drawn first. Option A is pretty solid, but the problem is with the lights. In what layer do I add them? Do they work cross-layer? On all bottom layers? And I still need the Z coordinate to calculate the shadow! Option B would require me to change all my code from having Vector2D positions, to some kind of class that inherits from Vector2D and adds a z coordinate to it (I don't want it to be a Vector3D because I still need all the same methods the 2D kind has, just with .z clamped on). Am I missing something? Is there an alternative to these methods? I'm working in JavaScript, if that makes a difference.
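    As a sketch of option B only (illustrative names, not from the post), keeping a plain numeric z on each drawable and sorting the draw list once per frame is usually enough, and the same z can feed the shadow calculation:

      // Option B sketch: z is used for draw order (and shadows); positions stay 2D.
      interface Drawable2D {
        x: number;
        y: number;
        z: number;        // draw-order depth, not a world coordinate
        draw(): void;
      }

      class Scene {
        private objects: Drawable2D[] = [];

        add(obj: Drawable2D): void { this.objects.push(obj); }

        drawAll(): void {
          // Lower z first, so higher-z objects are painted over them.
          this.objects.sort((a, b) => a.z - b.z);
          for (const obj of this.objects) obj.draw();
        }
      }

    This keeps Vector2D untouched; z lives on the scene object rather than on the position type, which also sidesteps the Vector3D inheritance question.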

    Read the article

  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don’t even remember when I first started to write it, but I feel like my opinion is well enough baked to share.) I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try and outline what I see as some of the problems, and hopefully offer my views on how to address them. The recruiting problem I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real life achievements, followed by some variation on white boarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash"). The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they never have, and never will do in their job. First they will question you about obscure academic material you've never seen, or don't care to remember. Then they'll ask you to white board some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of waste time on a solved problem. Some will tell you that the academic gauntlet interview is useful to see how people respond to pressure, how they engage in complex logic, etc. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you. But here's the real reason why the second experience is wrong: You're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C#, or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software. The first situation, involving real life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem. 
I had to demonstrate how I would design a class, make sure the unit testing coverage was solid, etc. I worked at that company for two years. So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be life-long learners. The contractor problem I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, there's little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors. The education problem I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a masters degree who sucked at writing code, next to the high school diploma types that rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter. As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn. The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it. So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers. 
I'm hell bent on passing that experience to others. Process issues If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense, while bringing your business into the realm of modern software development. That doesn't mean you slap up a white board with sticky notes and start calling yourself agile, it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes, because without the right technique being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery, now make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when change breaks stuff. The prognosis I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

    Read the article

  • Can I make Apache drop a connection when matching a URL?

    - by PP
    Using mod_rewrite I can construct a rule to respond with a clean error code (e.g. 404 not found, 410 gone, or 403 unauthorised) when a page is requested that I don't want to serve. But frequently I get completely erroneous requests from hackers scanning my website for vulnerabilities or possibly cross-site scripting attempts. For these customers I do not want to return a clean error - I'd rather do something else like immediately drop the connection with no response or, alternatively, hold the connection open for a lengthy period of time to frustrate the automated process. Any ideas how to accomplish this with Apache? I've read that nginx has the ability to immediately terminate a connection when a particular pattern is matched.

    Read the article

  • Is there an application that can do a blue screen effect with a webcam?

    - by Axxmasterr
    Background: This is not the blue screen of death I am speaking of but the process called "Blue Screening" that takes and removes a particular colored background from an image so that it can be superimposed on some other video/still picture. If you have ever seen the weatherman stand in front of the map, then you have seen someone doing a blue screen technique. I would like to be able to capture video from my webcam, then send that video to a blue screen program which removes the white (or other color) from the background and then inserts a background of my own choosing. (think of the dead guy in freejack who was calling from all of the different places on earth) Then once the image is superimposed, I would like to pipe it into Skype for video conferencing. Anyone have a good way to do this?

    Read the article

  • Very Cool – Miami 311 System for tracking citizen service requests (Windows Azure, Silverlight

    - by Jim Duffy
    Having grown up in South Florida, this short but very enlightening video explaining how the City of Miami has implemented a 311 citizen service request system using Windows Azure, Silverlight and Bing Maps definitely caught my attention.

    Miami311: The Miami311 System is a Windows Azure/Silverlight-based solution which enables City of Miami citizens to report and track issues reported to city management. The system uses Bing Maps to plot the location and relevant information about each issue reported. Citizens now have the ability to easily see the status of an issue without having to call the city office.

    What I found interesting were a couple of benefits that a metropolitan area such as Miami can take advantage of in a Windows Azure cloud-based solution. For the city of Miami, both benefits center around the weather. Of course the threat of a hurricane is a real issue in South Florida, and what better way to make sure your site stays up during a hurricane than to have the site hosted far away from the eye of the storm. Using a Windows Azure cloud-based architecture, the City of Miami is able to host the application within the Microsoft data centers, safely away from any hurricane passing through South Florida.

    The second benefit is the inherent scalability of a Windows Azure-based solution. During a severe weather event like thunderstorms or, even worse, a hurricane, downed trees and power lines are a commonly reported problem. Being able to quickly scale up the computing resources required to handle the spike in citizens reporting these types of problems on the site is a huge benefit. Once the weather event has passed and downed-tree reports begin to subside, they can quickly reverse the process and scale the system back down to pre-storm levels.

    It’s day-to-day kind of stuff, but very cool stuff nonetheless. Have a day. :-|

    Read the article

  • Merge two PDF files containing even and odd pages of a book

    - by Yurij73
    I have two searchable PDF documents, say even.pdf and odd.pdf, which contain the even and odd pages of a book, respectively. I can decompile each PDF to separate files 001.pdf, 002.pdf, 003.pdf, et cetera. The question is how to merge them: both sequences come out numbered 1, 2, 3. If the numbering in the decompile process with pdftk were different, e.g. 1, 3, 5 for odd and 2, 4, 6 for even instead of 1, 2, 3, 4, I could simply merge them. Can I do this any other way?
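    If the two files are kept as they are (odd pages in one, even pages in the other), pdftk can collate them directly with its shuffle operation, which takes one page from each input in turn. Assuming page 1 of the book is in odd.pdf, something along the lines of:

      pdftk A=odd.pdf B=even.pdf shuffle A B output book.pdf

    should produce the pages in reading order without renaming any of the burst files.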

    Read the article

  • What's a good entity hierarchy for a 2D game?

    - by futlib
    I'm in the process of building a new 2D game out of some code I wrote a while ago. The object hierarchy for entities is like this:
      • Scene (e.g. MainMenu): Contains multiple entities and delegates update()/draw() to each
      • Entity: Base class for all things in a scene (e.g. MenuItem or Alien)
      • Sprite: Base class for all entities that just draw a texture, i.e. don't have their own drawing logic
    Does it make sense to split up entities and sprites like that? I think in a 2D game, the terms entity and sprite are somewhat synonymous, right? But I do believe that I need some base class for entities that just draw a texture, as opposed to drawing themselves, to avoid duplication. Most entities are like that. One weird case is my Text class: it derives from Sprite, which accepts either the path of an image or an already loaded texture in its constructor. Text loads a texture in its constructor and passes that to Sprite. Can you outline a design that makes more sense? Or point me to a good object-oriented reference code base for a 2D game? I could only find 3D engine code bases of decent code quality, e.g. Doom 3 and HPL1Engine.

    Read the article

  • How to Archive, Search, and View Your Tweet Statistics with ThinkUp

    - by YatriTrivedi
    Worried about archiving your tweets? Want a more powerful search? Want to see your tweet statistics? You can do all of that and more by installing ThinkUp on your home server. ThinkUp is a brilliant application (currently in beta) that will archive all of your tweets, your replies, responses, etc. so that you can search through them and find out some helpful usage statistics. It has quite a few plugins, including one that adds full Facebook support, too. It’s designed to be installed on a LAMP server; that is, Linux, Apache, MySQL, and PHP is what will provide the backbone for it. While it’s possible to install it on a Windows- or Mac-based machine, it’s most easily handled in Linux, so we’ll be using Ubuntu to show you how to get it up and running. It’s in very active development by the founder, Gina Trapani, and by many users in the community.

    Read the article

  • How do I start Ubuntu without X server?

    - by Kaare Mikkelsen
    So, I'm trying to install the official nVidia drivers for my fancy graphics card, and they advise disabling the X server before installing, as well as making sure that I can boot without the X server, so as not to wreck anything. However, I seem to be doing something wrong. As I understand it, this should be as simple as changing the runlevel from 2 to 1? (I am aware that all this may simply be me not understanding runlevels.) If that is correct, a quick test should be simply typing "sudo init 1" or "sudo telinit 1" in a terminal? Doing that makes the system attempt to shut down, only it stops at the purple screen with the Ubuntu logo and 5 white dots underneath. I haven't observed it get anywhere from there; I always end up holding down the power button. "sudo telinit 3" has no visible effect. Alternatively, I should be able to get there using recovery mode, activated through the GRUB menu? I have very little success with that. After picking recovery mode, I am faced with a set of options about how to proceed. Both when choosing the one with "network enabled" and the one with "text only", I get a dialog explaining that this will mount my / file system in read/write mode, and asking whether this is what I want. I choose yes, and it seems to report that my drive is fine (there's a single line of text detailing the state of the partition). And then it stops. I haven't tried letting it sit for more than a few minutes, but presumably this process should be comparable in duration to a regular boot? I am not particularly fond of messing with any .conf files until I am certain that I can handle things with the training wheels on. So, I guess there are two questions: the one in the title, and "how do I start a text-only session without changing defaults?" Thanks in advance :)

    Read the article

  • Infrastructure to effectively set up experiments and learn from them

    - by David
    Open-org.com is in the early stages of creating our first product, a place on the web where one can ask lawyers questions at a fraction of their normal costs. An early-stage front page can be found here. I got inspired by this video, which is recommended by Jeff Atwood and which talks about getting feedback faster, which is the reason for this question.

    The problem: Needless to say, we want our conversion rates to be as high as possible. Therefore, we want to be able to rapidly set up a new experiment where we change something on the site (like moving an image slightly, rewriting a sentence, etc.). We then want to present the modified page to a random subset of the users. After that we will compare the conversion rates of the experiment with another version. I could very well imagine that we want to run 10-100 experiments simultaneously, and it would be nice to have features where experiments that obviously have worse results will be ended ahead of schedule.

    My question: Does infrastructure to support the whole process exist?

    A short description of our infrastructure: We use EC2 and PHP and have a script to automatically start up new instances with all needed software. Still, starting up a new server for every experiment seems like a bit of overkill, so I am wondering what other options exist.

    By the way, if you feel like working for Open-org.com, you can pick a task and start working, or suggest a new task. All profits are given out to the contributors.

    Read the article

  • Google Image Search Quick Fix

    - by Asian Angel
    Are you tired of unneeded webpage loading and extra link clicking just to access an image found using Google Image Search? Now you can jump directly to the image itself with the clickGOOGLEview extension for Google Chrome.

    The Problem: When you find an image that you like using Google Image Search you always have to go through extra hassle just to get to the image itself. First you have an entire webpage loading in your browser and then you have to click through that irritating “See full size image” link. All that you need is the image, right?

    Problem Fixed: Once you have installed the clickGOOGLEview extension you will absolutely love the result. Find an image that you like, click the link, and there is your new image without any of the hassle or extra link clicking. Big or small, having direct access to the image is how it should have been from the beginning.

    Conclusion: The clickGOOGLEview extension does one thing and does it extremely well…it gets you to those images without the extra hassle or additional link clicking.

    Links: Download the clickGOOGLEview extension (Google Chrome Extensions)

    Read the article

  • Z-order with Alpha blending in a 3D world

    - by user41765
    I'm working on a game in a 3D world with 2D sprites only (like the game Don't Starve), using OpenGL ES2 with C++. Currently, I'm ordering elements back to front before drawing them without batching (so 1 element = 1 draw call). I would like to implement batching in my framework to decrease draw calls. Here is what I've got for the moment:
      • Order all elements of my scene back to front.
      • Send the ordered list of elements to the Renderer.
      • The Renderer looks in its batch manager to see if a batch exists for the given element with its Material.
      • If the batch doesn't exist: create a new one.
      • If a batch exists for an element with this Material: add the sprite to the batch.
      • Compute a big mesh with all sprites for each batch (1 material type = 1 batch).
      • When all batches are OK, the batch manager computes draw commands for the renderer.
      • The Renderer processes the draw commands (bind shader, bind textures, bind buffers, draw elements).
    Image with my problem here: Explication here. But I've got some problems because objects can be behind other objects inside another batch. How can I do something like that? Thanks!

    Read the article

  • Broken package for libavcodec54 & libx264-123 in Ubuntu 14.04 LTS

    - by Kachavarapu Ajay
    $ sudo apt-get install -f
    [sudo] password for ajay:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following extra packages will be installed:
      libx264-123
    The following NEW packages will be installed:
      libx264-123
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    2 not fully installed or removed.
    Need to get 0 B/345 kB of archives.
    After this operation, 1,005 kB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    (Reading database ... 166965 files and directories currently installed.)
    Preparing to unpack .../libx264-123_0.123.2189+git35cf912-1ubuntu4_amd64.deb ...
    Unpacking libx264-123:amd64 (2:0.123.2189+git35cf912-1ubuntu4) ...
    dpkg-deb (subprocess): decompressing archive member: lzma error: compressed data is corrupt
    dpkg-deb: error: subprocess <decompress> returned error exit status 2
    dpkg: error processing archive /var/cache/apt/archives/libx264-123_0.123.2189+git35cf912-1ubuntu4_amd64.deb (--unpack):
     cannot copy extracted data for './usr/lib/x86_64-linux-gnu/libx264.so.123' to '/usr/lib/x86_64-linux-gnu/libx264.so.123.dpkg-new': unexpected end of file or stream
    Errors were encountered while processing:
     /var/cache/apt/archives/libx264-123_0.123.2189+git35cf912-1ubuntu4_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
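    An "lzma error: compressed data is corrupt" during unpack usually points at a damaged .deb in the apt cache rather than a genuinely broken package. A reasonable first step (a suggestion, not a verified fix for this machine) is to clear the cache and let apt re-download the archive:

      sudo apt-get clean
      sudo apt-get install -f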

    Read the article

  • QA & Testing with UPK

    - by dan.gallo(at)oracle.com
    Most customers know that UPK produces both the word and excel based test scripts for UAT. Did you know that you can use UPK for QA review and bug tracking? To use UPK for QA, create content and assign it appropriately to authorized reviewers. Then have them open the developer, use customized views to find content assigned to them quickly and check out the topics. Then they can use the topic editor to review the content and provide comments right into the bubbles or use explanation frames. It makes QA-ing content this way easier than publishing and sending out .tpcs or docs for people to review. How about UPK for bug tracking? The hardest part about fixing bugs in software is reproducing the error! When you use UPK for bug tracking, it captures the exact steps the user took that gave them the error. Now development can easily walk through the process in a simulated environment to see what might have caused it, they have a documented procedure for what generated the error and they are able to better communicate with the LOB. Also, they can update or attach the simulation\documentation to any defect management software like bugzilla or something similar -all thanks to UPK.

    Read the article

  • Can't double-click files to open them in InDesign (CS5)

    - by Matt
    I cannot open a file unless I open InDesign (the program) and then do File > Open. If I double-click, it starts to open, then just hangs forever. AFTER I close it and look in the directory where the files are saved, I see a (temporary?) "lock" file. Now I can double-click the original file and it opens just fine. However, when I close InDesign it deletes the file and the whole process starts again... I have tried updating the software, uninstalling COMPLETELY and reinstalling, and trying a brand new Win7 install. These files are all saved on a network drive; the computer is a new quad-core Dell with 12 GB of RAM and a fresh x64 Win7 install on the SSD. This does not happen with other programs.

    Read the article
