Search Results

Search found 13862 results on 555 pages for 'questions'.


  • todo manager (gtd): subtasks, php

    - by kusoksna
    Currently I'm using todoist.com as my GTD manager and I'm almost satisfied with it. Is there any FOSS software that provides the following features?
    1. Unlimited (at least 5) subtask levels
    2. An easy way to complete tasks (like in Todoist)
    3. Easy task editing
    4. PHP based (as I want to host it on my own server)
    5. Due dates (including recurring ones)
    6. Labels, colors, etc. would be nice, but are not critical
    PS: I already checked several similar questions, including "Which GTD tool/webservice do you recommend?", but haven't found suitable software yet.

    Read the article

  • Programming and Ubiquitous Language (DDD) in a non-English domain

    - by Sandor Drieënhuizen
    I know there are some questions already here that are closely related to this subject, but none of them take Ubiquitous Language as the starting point, so I think that justifies this question. For those who don't know: Ubiquitous Language is the concept of defining a (both spoken and written) language that is used equally by developers and domain experts, to avoid inconsistencies and miscommunication due to translation problems and misunderstanding. You will see the same terminology show up in code, in conversations between any team members, in functional specs, and so on.
    So, what I was wondering about is how to deal with Ubiquitous Language in non-English domains. Personally, I strongly favor writing programming code entirely in English, including comments but of course excluding constants and resources. However, in a non-English domain, I'm forced to make a decision to either:
    1. Write code reflecting the Ubiquitous Language in the natural language of the domain.
    2. Translate the Ubiquitous Language to English and stop communicating in the natural language of the domain.
    3. Define a table that defines how the Ubiquitous Language translates to English.
    Here are some of my thoughts based on these options:
    1) I have a strong aversion against mixed-language code, that is, code using type/member/variable names etc. that are non-English. Most programming languages 'breathe' English to a large extent, and most of the technical literature, design pattern names, etc. are in English as well. Therefore, in most cases there's just no way of writing code entirely in a non-English language, so you end up with mixed languages.
    2) This will force the domain experts to start thinking and talking in the English equivalent of the UL, something that will probably not come naturally to them and therefore hinders communication significantly.
    3) In this case, the developers communicate with the domain experts in their native language, while the developers communicate with each other in English and, most importantly, they write code using the English translation of the UL.
    I'm sure I don't want to go for the first option and I think option 3 is much better than option 2. What do you think? Am I missing other options?
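    To make option 3 a bit more concrete, here is a minimal sketch (in Python, with hypothetical Dutch domain terms and an invented Invoice example, none of which come from a real project) of keeping a UL-to-English glossary right next to code that itself stays entirely in English:
        # Minimal sketch of option 3: a UL-to-English glossary kept alongside the code.
        # The Dutch terms and the Invoice example below are hypothetical, purely for illustration.

        # Glossary: natural-language UL term -> English term used in code.
        UBIQUITOUS_LANGUAGE_GLOSSARY = {
            "factuur": "invoice",
            "debiteur": "debtor",
            "vervaldatum": "due_date",
        }

        class Invoice:
            """Domain concept 'factuur' (see glossary); code and comments stay in English."""

            def __init__(self, debtor: str, due_date: str) -> None:
                self.debtor = debtor        # UL: 'debiteur'
                self.due_date = due_date    # UL: 'vervaldatum'

        def lookup_english_term(domain_term: str) -> str:
            """Translate a term from the spoken UL to the identifier used in code."""
            return UBIQUITOUS_LANGUAGE_GLOSSARY[domain_term.lower()]

        print(lookup_english_term("Factuur"))  # -> "invoice"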

    Read the article

  • Slides and Pictures from PowerShell Saturday Columbus 2012

    - by Brian Jackett
    On March 10th, 2012 the first ever PowerShell Saturday conference took place in Columbus, OH, and I couldn’t be happier with the outcome. We had 100 attendees from 10 different states (the biggest surprise to me) come to see 6 speakers present on a variety of PowerShell topics: introduction, WMI, SharePoint, Active Directory, Exchange, 3rd party products and more. A big thank you also goes out to a number of people.
    Planning committee: Wes Stahler (lead organizer of PowerShell Saturday Columbus, president of the Central Ohio PowerShell User Group), Ed “Microsoft Scripting Guy” Wilson, Teresa “The Scripting Wife” Wilson, Ashley McGlone, and Brian T. Jackett (myself).
    Speakers: Ed Wilson, Ashley McGlone, James Brundage, Trevor Sullivon, and Daniel Cruz.
    Volunteer: Lisa Gardner, a fellow Microsoft PFE, volunteered her time on a Saturday to assist with the smooth operation of the day.
    Facility coordination: Debbie Carrier, facilities coordinator for the Columbus Microsoft office, helped us out greatly with the venue.
    Slides and script samples: I presented my session on “PowerShell for the SharePoint 2010 Developer”. Below you can download the slides and script samples.
    Photos: I wasn’t able to take many pictures (only 3) as I was busy doing my presentation, answering questions, and taking care of random items throughout the day. Pictures on Facebook: click here. Pictures on SkyDrive (higher res): PowerShell Saturday Columbus Mar '12.
    Conclusion: I’m very happy that this first ever PowerShell Saturday was a success. My fellow PFE and speaker Ashley McGlone also has a short write-up on his blog about the event (click here). I have heard rumors that other cities are starting to plan their own local events. When I hear more details I’ll spread the word here and on Twitter. -Frog Out

    Read the article

  • Is there such a thing as "server sliding" (or similar), and if so, what is it?

    - by mahoke
    I am not a network engineer, but rather a translator, so I apologize in advance if this is a rather obvious question to some of you. Normally Google can answer my questions, but in this case I'm coming up blank. If I'm asking this in the wrong forum, please let me know. The text is talking about RADIUS accounting functionality. It says that when there are many (more than 200) authenticated terminals, if a command is issued to forcibly clear the terminals, "sliding to the next RADIUS accounting server may occur" -- the original text literally says "RADIUS accounting server-slide". I think I can basically understand what they are getting at in terms of meaning, but I would like to know whether this is the correct expression to use. I get the impression from my inability to find it simply by Googling that perhaps it's described differently in English.

    Read the article

  • How many of you *really* surf around without JavaScript enabled? [closed]

    - by Stephen
    I've decided to rephrase the question. After some deliberation on Meta, I've realized that my question needs to be a bit more focused.
    The question: Should we (web developers) continue to spend effort progressively enhancing our web applications with JavaScript, ensuring that features degrade gracefully, thereby ensuring accessibility? Or should we spend that time focused on new features or other areas of development?
    The subtext of that question would be: How many of our customers/clients/users utilize our websites or applications with JavaScript disabled? Do you have any projects with requirements that specifically demand JavaScript functionality (almost all of mine do), and do those requirements also demand graceful degradation?
    For the sake of asking this question, I pulled up programmers.stackexchange.com without JavaScript enabled, and I was greeted with this message: "Programmers - Stack Exchange works best with JavaScript enabled". It was difficult to log in, although the site seemed to generally work okay. (I wasn't able to vote up any questions.) I think this is a satisfactory approach to development. Imagine the effort involved in making all of the site's features work with plain old HTML and server-side logic. On the other hand, I wonder how many users have been alienated by this approach.
    We've all been trained (at least the good developers among us) to use progressive enhancement and to ensure our web applications' dynamic features degrade gracefully. Is this progressive enhancement just pissing into the wind, or do some of our customers actually utilize certain web services without JavaScript enabled? I mean, like really, not figuratively or presumptuously.

    Read the article

  • Will adding top level directories with similar structure to existing directories change the SEO of my site?

    - by Russell Sims
    I've been pointed this way for SEO-related questions, and this one has had me pondering for a little while now. I'm recreating a site's structure. The website's content is generated through several feeds, and unless I want to place each and every one of the 10,000-odd venues into its own category manually, I can't avoid categorising each item by using its address. The current structure looks like this: Homepage > region > county > city/town > venue page, and the URL looks like domain/region/county/city/venue/. I'm relatively happy to use this structure as it's not too convoluted. However, we also promote deals and we also group the venues into their respective franchises, which leads to URLs such as domain/groups and domain/deals.
    My question is: how would the directory structure look with these new additions? Would I have a URL that looks like domain/deals/region/county/city/venue or domain/group/region/county/city/venue and just put a 301 or a canonical link tag on the page to prevent the duplicate pages competing with each other? Or am I just worrying about it needlessly, and should I perhaps link straight from domain/deals to the venue page URL domain/region/county/city/venue? That bothers me a bit though, as the deals and groups will not be in the breadcrumbs.

    Read the article

  • EC2 instance store cloning or conversion to EBS via the GUI management console

    - by devnull
    I have found similar questions here, but the answers are either outdated or are from the command line. The case is this: I have an EC2 instance using the instance store (this was the only AMI available for Debian 6 in Ireland). Now, through the AWS GUI, I can take a snapshot of the instance volume and/or even create a volume, but an image made from the snapshot doesn't boot. What is the best solution to either clone an EC2 instance that uses the instance store, OR launch a new EBS-backed instance (an identical clone) from the created snapshot of the instance store, FROM the AWS management console GUI and not the command line? Before voting this down, consider that there is no similar question on how to do this via the AWS management console. Hint: "it can't be done" is not an appropriate answer, as you can create a snapshot of the instance-store-backed instance and/or a volume and create an AMI from that snapshot.

    Read the article

  • Port Forwarding a Specific Port (e.g. 22)

    - by Jerry Blair
    I'm still confused about establishing an SSH connection (port 22) between two computers on different internal networks. For example: I am on my computer with internal IP address IIP-1, connected to my router RT-1. There are 10 IIPs connected to RT-1. I want to establish an SSH connection to IIP-3 which is connected to router RT-2. There are 10 IIPs connected to RT-2. At any time, there can be multiple SSH connections between IIPs on RT-1 and RT-2. Since I only have port 22 available, I don't know which SSH session is talking between which IIPs. I looked at a couple of similar questions but am still unclear on the solution. Thanks much, Jerry

    Read the article

  • How can I improve my error checking and handling?

    - by Google
    Lately I have been struggling to understand what the right amount of checking is and what the proper methods are. I have a few questions regarding this: What is the proper way to check for errors (bad input, bad states, etc.)? Is it better to explicitly check for errors, or to use functions like asserts, which can be optimized out of your final code? I feel like explicit checking clutters a program with a lot of extra code which shouldn't be executed in most situations anyway, not to mention that most errors end up in an abort/exit failure. Why clutter a function with explicit checks just to abort? I have looked for asserts versus explicit checking of errors and found little that truly explains when to do either. Most say 'use asserts to check for logic errors and use explicit checks to check for other failures.' This doesn't seem to get us very far though. Would we say this is feasible:
    - malloc returning null: check explicitly
    - API user passing odd input to functions: use asserts
    Would this make me any better at error checking? What else can I do? I really want to improve and write better, 'professional' code.
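    As a minimal illustration of that split (a sketch in Python rather than C, with a made-up parse_age/_bucket_for_age pair, not code from any particular codebase): externally supplied input gets explicit checks that always run, while internal invariants get asserts that may be compiled out:
        # Sketch: asserts guard internal invariants (programmer errors, may be stripped with -O);
        # explicit checks guard external input (user/API errors, must always run).

        def parse_age(raw: str) -> int:
            # Explicit check: 'raw' comes from outside, so validate and fail loudly but cleanly.
            if not raw.strip().isdigit():
                raise ValueError(f"age must be a non-negative integer, got {raw!r}")
            age = int(raw)
            if age > 150:
                raise ValueError(f"age {age} is out of the supported range")
            return age

        def _bucket_for_age(age: int) -> str:
            # Assert: by the time we get here, validation has already happened;
            # a failure means a bug in *this* program, not bad user input.
            assert 0 <= age <= 150, "caller must pass a validated age"
            return "minor" if age < 18 else "adult"

        print(_bucket_for_age(parse_age("42")))  # -> "adult"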

    Read the article

  • Set up ad hoc wireless connection between Windows Vista and Mac OS X

    - by Skarab
    I have the following problem: Windows Vista does not connect to the ad hoc wireless network created on my MacBook. I have tried to create a secured (with a 40-bit key) and an unsecured network, but Windows Vista still has problems connecting. After about 5 minutes of attempts, Windows Vista informs me that setting up the connection with my ad hoc network took too much time. My question: do I need to configure some settings on Vista to connect it to my MacBook? Maybe it is a problem with DHCP? Edited: I have tried the other way: http://superuser.com/questions/202890/set-up-an-adhoc-network-in-windows-vista-to-connect-to-and-share-the-internet-con

    Read the article

  • What deployment framework to use?

    - by jeruki
    We are trying to figure out what deployment method/framework to use with a Python application. It has a basic WSGI server to make some REST resources available and a set of static web pages with the interface, which are served through Apache. The situation is as follows: my team works on isolated parts of the program and sometimes together on specific modules; we have different testing servers and one master server; we all work locally, sync the code using git, and then run a bash script that copies the files from the Windows machines to the indicated Linux server (using SSH) and then restarts the app. After thinking about it, this doesn't seem to be the right way to do it: the script overwrites all the files on the server with the local files every time. We want to be able to work on the same server without the worry of overwriting other people's code, we need to deploy to different servers to avoid restarting the service while others work with it, and in the near future we need to deploy to the master or several clones of the master server when the application reaches a more mature state. We found several options (capistrano, kwate, chef or fortress, even fleet) but we wanted opinions from people who have used them, to be sure it is what we need. So these are the main questions: Are these the kind of programs we should be looking at to achieve a safe concurrent deployment process? Which one have you used or would you recommend, and why? Do you think it would help in our actual situation? Thank you so much for your feedback and advice on this.
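    For context, our current bash script is roughly equivalent to the following Python sketch (the hostnames, paths and restart command below are made up for illustration); it shows the naive "copy everything and restart" behaviour we want a proper deployment tool to replace:
        # Rough Python equivalent of our current deploy script (hypothetical hosts/paths),
        # illustrating the naive "copy everything and restart" approach we want to move away from.
        import subprocess

        SERVERS = ["test1.example.internal", "test2.example.internal"]  # assumed names
        LOCAL_SRC = "./app/"            # local working copy
        REMOTE_DST = "/srv/app/"        # deployment target on the server
        RESTART_CMD = "sudo systemctl restart app.service"  # assumed restart command

        def deploy(server: str) -> None:
            # rsync overwrites the remote tree with the local one -- exactly the problem:
            # whoever deploys last wins, regardless of what others had on the server.
            subprocess.run(["rsync", "-az", "--delete", LOCAL_SRC, f"{server}:{REMOTE_DST}"], check=True)
            subprocess.run(["ssh", server, RESTART_CMD], check=True)

        if __name__ == "__main__":
            for host in SERVERS:
                deploy(host)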

    Read the article

  • Has anyone else read "Programming video games for the Evil Genius"

    - by Martin
    I bought this book called "Programming Video Games for the Evil Genius" by Ian Cinnamon. If there is anyone who has read or is familiar with this book, I am wondering if they think it is worth reading. I am interested in making video games. I have already taken intro courses in C++, Java and Python and got through okay. I've been going through this book for about a month now (slowly). All I have to do is type the code exactly as it appears in the book, BUT a lot of the code is not clearly explained. I do some research online but I usually still have some trouble answering my questions. Then I found Stack Overflow; it's been a ton of help. Right now I am trying to make a racing game straight out of this book, and I got to a point where the author left a bunch of errors in his code. One of the members of this website fixed it up for me, but added some stuff that I'm having trouble understanding. I spend more time trying to figure out the author's errors and fix them (or get someone to help me fix them) than I actually do learning code. I REALLY want to learn how to do this and I am ready and willing to put in the time, but I'm not sure if my time would be better spent learning from a different source. Are there any veterans out there who are familiar with this book and think it's worth it or not worth it? Should I move on to another book? Any advice for a fresh start for someone who wants to learn some video game programming?

    Read the article

  • How do I work around sudo 'segmentation fault' on basic bash commands?

    - by sage
    I am sure the answers are out there, but alas there are too many answers (here and elsewhere) to other questions, which keeps me from finding them. I just encountered something substantially similar to what is described in the closed SO question "sudo : 'segmentation fault' Ubuntu maverick [closed]". My team is using Ubuntu 11.04 on VMware Workstation 8.0.4. We are doing development using C++, Xenomai, Qt, and Qt Creator. When we simulate our application on the virtual machine, we currently need to launch Qt Creator with sudo. My colleague mentioned today that he has been having issues where his workstation locks up and he needs to restart, and that occasionally all sudo bash commands return "segmentation fault". I just ran our application in simulation mode. I was running Qt Creator under sudo and Qt Creator received an abort signal (if I recall correctly). Afterward, every command executed with sudo, from sudo qtcreator to sudo ls, resulted in the message "Segmentation fault". I clicked on the power widget to see if I could log out, but the system shut down straight away without prompting. My understanding is that we run sudo because of a permissions issue with Xenomai and the VM as currently configured, but my colleague has a workaround for this. I expect that not running Qt Creator under sudo (something that has always made me nervous) will help contain this issue, but I find it troubling that this could happen and manifest as it does. Does anyone know what is happening? Any recommendations on how to work around this issue? This is happening often, so I am trying to lobby for VM changes so that we can run the process without sudo.

    Read the article

  • What is the best way to code the XNA Game Server for FPS game?

    - by AgentFire
    I'm writing an FPS XNA game. It's going to be multiplayer, so I came up with the following: I'm making two different assemblies, one for the game logic and the second for drawing it and the game-irrelevant stuff (like rocket trails). The type of connection is client-server (not peer-to-peer), so every client first connects to the server and then the game begins. I have definitely decided to use the XNA.Framework.Game class for the clients, to run their game in a window (or fullscreen), and the GameComponent/DrawableGameComponent classes to store the game objects and update and draw them on each frame. Next, I want to get an answer to the question: what should I do on the server side? I've got a few options:
    1. Create my own Game class on the server, which will process all the game logic (only, no graphics). The reason why I am not using the standard Game class is that when I call Game.Run() a white window appears and I can't figure out how to get rid of it.
    2. Somehow use the original XNA Game class, which already has the GameComponent collection and an Update event (60 times per second, just what I need).
    UPDATE: I've got more questions. First, what socket mode should I use, TCP or UDP? And how do I actually let the client know that this packet is meant to be processed after that one? Second, if I am going to use exactly the GameComponent class for the game objects which are stored and processed on the server, how do I make them drawn on the client? Inherit them (while they are combined into an assembly)? Something else?
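    To make the packet-ordering part of the UPDATE concrete, here is a minimal sketch (in Python rather than XNA/C#, with a made-up JSON payload and port) of the kind of sequence-number scheme I have in mind, where the receiver simply ignores anything older than the last update it applied:
        # Sketch (not XNA/C#): ordering UDP state updates with a sequence number.
        # The payload format and port are made up for illustration.
        import json
        import socket

        PORT = 50007  # arbitrary port for the sketch

        def send_state(sock: socket.socket, addr, seq: int, state: dict) -> None:
            # Every packet carries the sender's monotonically increasing sequence number.
            sock.sendto(json.dumps({"seq": seq, "state": state}).encode(), addr)

        def receive_latest(sock: socket.socket, last_applied: int):
            # Drop anything older than what we've already applied; UDP may reorder or duplicate.
            data, _ = sock.recvfrom(4096)
            packet = json.loads(data.decode())
            if packet["seq"] <= last_applied:
                return last_applied, None           # stale packet, ignore it
            return packet["seq"], packet["state"]   # newer packet, apply it

        if __name__ == "__main__":
            recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            recv.bind(("127.0.0.1", PORT))
            send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            send_state(send, ("127.0.0.1", PORT), seq=1, state={"x": 10, "y": 3})
            send_state(send, ("127.0.0.1", PORT), seq=2, state={"x": 11, "y": 3})
            last, state = receive_latest(recv, last_applied=0)
            last, state = receive_latest(recv, last_applied=last)
            print(last, state)  # -> 2 {'x': 11, 'y': 3}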

    Read the article

  • Ubuntu installation does not recognize previous partitions

    - by Hawkcannon
    I have been attempting to install Ubuntu (10.04, Lucid Lynx) on my computer. I wasn't ready to take the pure-Linux plunge yet, so I reserved a partition on which I would install Ubuntu. I ran the installer and answered the 'minor' questions (keyboard layout, time zone, etc.), but had trouble when I reached the partitioning. I have several partitions, but Ubuntu only saw one of them, which was not the ext3 partition that I had set up. I tried deleting the partition in hope that the installer would find and utilize the empty space, but it only saw the original partition. I do not have an external hard drive to use, and I cannot clear any existing partitions. Am I running the installer incorrectly, or is there a more serious problem?

    Read the article

  • Redirecting subsite on same domain to other IIS using HTTPS

    - by Alberto
    I've seen many similar questions (and answers) on this subject, but none seem to be about exactly the situation I am facing, which is weird since I don't think it is that special, so forgive me if I haven't searched enough. Anyway: I have two websites on two IIS 7 servers, one facing the WAN and one in the LAN. The WAN-facing one is already HTTPS-only. I want to add the second website, but on the same HTTPS domain and SSL certificate, so that it becomes a subsite like: https://www.domain.com/subsite How can I do a redirect or rewrite on the first IIS to the second one to make this work? I don't think there is a standard IIS feature that can do this. ISA Server is not an option currently, but maybe another extension to IIS exists? I've done this numerous times on Apache, and am about to ditch IIS for Apache.

    Read the article

  • libgdx ActorGestureListener.pan() parameters not moving actor in smooth line

    - by Roar Skullestad
    I override the pan method in ActorGestureListener to implement dragging actors in libgdx (scene2d). When I move individual pieces on a board they move smoothly, but when I move the whole board, the x and y coordinates that are sent to pan are "jumping", increasingly so the longer the drag goes on. This is an example of the deltaY values sent to pan when dragging smoothly downwards:
    1.1156368 -0.13125038 -1.0500145 0.98439217 -1.0500202 0.91877174 -0.984396 0.9187679 -0.98439026 0.9187641 -0.13125038
    This is how I move the camera:
        // Override from ActorGestureListener; translate the camera by the drag delta.
        public void pan(InputEvent event, float x, float y, float deltaX, float deltaY) {
            cam.translate(-deltaX, -deltaY);
        }
    I have been using both the delta values sent to pan and the real position values, with similar results. And since it is the coordinates that are wrong, it doesn't matter whether I move the board itself or the camera. What could the cause be, and what is the solution? When I move the camera by only half the delta values, it moves smoothly but only at half the speed of the mouse pointer:
        cam.translate(-deltaX / 2, -deltaY / 2);
    It seems like moving the camera or board affects the mouse input coordinates. How can I drag at "mouse speed" and still get smooth movements? (This question was also posted on Stack Overflow: http://stackoverflow.com/questions/20693020/libgdx-actorgesturelistener-pan-parameters-not-moving-actor-in-smooth-line)

    Read the article

  • WebClient/Publisher Temporary Files in CleanMgr(Disk Cleanup)

    - by MsLis
    When I run Disk Cleanup (cleanmgr.exe) on my work PC (running WinXP), it claims to see 778,520 K of WebClient/Publisher Temporary Files. Although I do check that field, the number never drops. I've manually scanned through all possible temp folders on the drive and don't see anyplace hiding 760 MB of temp files, which makes me wonder whether those files merely exist in some list (registry or INI) somewhere but don't actually exist on the drive. So, my questions are:
    1. Where on the drive might those WebClient/Publisher Temporary Files be?
    2. Where does cleanmgr.exe look to determine how much disk space is used by WebClient/Publisher Temporary Files?
    Thanks in advance.

    Read the article

  • ArchBeat Link-o-Rama for August 2, 2013

    - by OTN ArchBeat
    Podcast: Data Warehousing and Oracle Data Integrator - Part 2. Part two of the discussion about Data Warehousing and Oracle Data Integrator focuses on how data warehousing is changing and the forces driving that change. Panelists for this discussion are Uli Bethke, Oracle ACE Director Cameron Lackpour, Oracle ACE Director (and guest producer) Gurcan Orhan, and Michael Rainey.
    Case Management In-Depth: Cases & Case Activities Part 1 – Activity Scope | Mark Foster. FMW solution architect Mark Foster kicks off a new series with a look at the decisions made on the scope of BPM process case activities.
    Video: Quick Intro to WebLogic Maven Plugin 12.1.2 | Mark Nelson. This YouTube video by FMW solution architect Mark Nelson offers a quick introduction to the basics of installing and using the new Oracle WebLogic 12.1.2 Maven Plugin.
    Running the Managed Coherence Servers Example in WebLogic Server 12c | Tim Middleton. FMW solution architect Tim Middleton shares the technical details of the new Managed Coherence Servers feature and outlines how you can run the sample application available with a WebLogic Server 12.1.2 install.
    What’s wrong with how we develop and deliver SOA Applications today? | Mark Nelson. "When we arrive at the go-live day, we have a lot of fear and uncertainty," says solution architect Mark Nelson of the typical SOA practice. "We have no idea if the system is going to work in production. We have never tested it under a production-like load, and we have not really tested it for performance, longevity, etc."
    OTN Latin America Tour 2013 | Kai Yu. Oracle ACE Director Kai Yu shares the session abstracts from his participation in the 2013 Oracle Technology Network Latin America conference tour, which made its way through OUG conferences in Ecuador, Guatemala, Panama, and Costa Rica.
    Webcast: Latest Security Innovations in Oracle Database 12c. Oracle Database 12c includes more new security capabilities than any other release in Oracle history! In this webcast Roxana Bradescu (Director, Oracle Database Security Product Management) will discuss these capabilities and answer your questions. (Registration required.)
    Thought for the Day: "The main goal in life career-wise should always be to try to get paid to simply be yourself." — Kevin Smith (Born August 2, 1970) Source: brainyquote.com

    Read the article

  • Non-Apple RAID card for Mac PRO (TOWER)

    - by Arthor
    I have the following: Mac Pro (Model Number: A1186), with PCIe slots. At present I am using software RAID, however I wish to move to hardware RAID because of the following:
    - Performance (4 x 300 GB SATA II in RAID 5)
    - Redundancy (RAID 5, 1 drive can fail and the system will stay online)
    I do not wish to use the Apple RAID card (very expensive); I would like to use an aftermarket one which is cheaper. Questions:
    1. Does anyone have a WORKING aftermarket RAID card in their Mac Pro (tower)? (I have done some research, e.g. RocketRAID, but need confirmation.)
    2. If so, does it work from boot?
    Thanks

    Read the article

  • Could not find rake-10.1.0 in any of the sources

    - by spuder
    I've got a Ruby on Rails application (GitLab) which is installed via Puppet. Everything on the test system runs fine, but production generates an error about rake when running /home/git/gitlab-shell/bin/check:
        Could not find rake-10.1.0 in any of the sources
        Run `bundle install` to install missing gems.
    Here is the full rake check:
        root@gitlab:/home/git# sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production
        Checking Environment ...
        Git configured for git user? ... yes
        Has python2? ... yes
        python2 is supported version? ... yes
        Checking Environment ... Finished
        Checking GitLab Shell ...
        GitLab Shell version >= 1.7.1 ? ... OK (1.7.1)
        Repo base directory exists? ... yes
        Repo base directory is a symlink? ... no
        Repo base owned by git:git? ... yes
        Repo base access is drwxrws---? ... yes
        update hook up-to-date? ... yes
        update hooks in repos are links: ...
        Could not find rake-10.1.0 in any of the sources
        Run `bundle install` to install missing gems.
        gitlab-shell self-check failed
        Try fixing it:
        Make sure GitLab is running;
        Check the gitlab-shell configuration file:
        sudo -u git -H editor /home/git/gitlab-shell/config.yml
        Please fix the error above and rerun the checks.
        Checking GitLab Shell ... Finished
        Checking Sidekiq ...
        Running? ... yes
        Number of Sidekiq processes ... 1
        Checking Sidekiq ... Finished
        Checking GitLab ...
        Database config exists? ... yes
        Database is SQLite ... no
        All migrations up? ... yes
        GitLab config exists? ... yes
        GitLab config outdated? ... no
        Log directory writable? ... yes
        Tmp directory writable? ... yes
        Init script exists? ... yes
        Init script up-to-date? ... yes
        projects have namespace: ... Spencer Owen / bar ... yes
        Projects have satellites? ... Spencer Owen / bar ... can't create, repository is empty
        Redis version >= 2.0.0? ... yes
        Your git bin path is "/usr/bin/git"
        Git version >= 1.7.10 ? ... yes (1.8.4)
        Checking GitLab ... Finished
    The step 'gitlab-shell check' effectively runs the following command. If I run that command manually, everything passes:
        root@gitlab:/home/git/gitlab# sudo -u git -H /home/git/gitlab-shell/bin/check
        Check GitLab API access: OK
        Check directories and files:
        /home/git/repositories: OK
        /home/git/.ssh/authorized_keys: OK
    I have verified that rake is in fact installed:
        root@gitlab:/home/git/gitlab# gem install rake -v 10.1.0
        root@gitlab:/home/git/gitlab# bundle install
        root@gitlab:/home/git/gitlab# sudo -u git -H gem install rake -v 10.1.0
        root@gitlab:/home/git/gitlab# sudo -u git -H bundle install
    Ruby is installed with update-alternatives:
        root@gitlab:/home/git/gitlab# sudo -u git -H ruby --version
        ruby 1.9.3p0 (2011-10-30 revision 33570) [x86_64-linux]
        root@gitlab:/home/git/gitlab# sudo -u git -H ls -l `which ruby`
        lrwxrwxrwx 1 root root 22 Oct 8 20:26 /usr/bin/ruby -> /etc/alternatives/ruby
        root@gitlab:/home/git/gitlab# sudo -u git -H gem --version
        2.1.10
        root@gitlab:/home/git/gitlab# sudo -u git -H ls -l `which gem`
        lrwxrwxrwx 1 root root 21 Oct 10 20:50 /usr/bin/gem -> /etc/alternatives/gem
    I've tried the solutions mentioned below, to allow shared gems:
        http://stackoverflow.com/questions/19284914/bundle-exec-fails-with-could-not-find-rake-10-1-0-in-any-of-the-sources
        http://stackoverflow.com/questions/18978002/could-not-find-rake-with-bundle-exec
        root@gitlab:/home/git/gitlab# cat /home/git/gitlab/.bundle/config
        ---
        BUNDLE_FROZEN: '1'
        BUNDLE_PATH: vendor/bundle
        BUNDLE_WITHOUT: development:test:postgres
        BUNDLE_DISABLE_SHARED_GEMS: '1'
    I've exhausted Google, so I'm hoping someone more familiar with Ruby can offer ideas on how to resolve the error:
        Could not find rake-10.1.0 in any of the sources

    Read the article

  • Good Laptop .NET Developer VM Setup

    - by Steve Brouillard
    I was torn between putting this question on this site or Super User. I've tried to do a good bit of searching on this, and while I find plenty of info on why to go with a VM or not, there isn't much practical advice on HOW to best set things up. Here's what I currently HAVE:
    - HP EliteBook 1540, quad-core, 8GB memory, 500GB 7200 RPM HD, eSATA port. Decent machine; should work just fine.
    - Windows 7 64-bit Host OS. This also acts as my day-to-day basic stuff (email, Word docs, etc.) OS.
    - VMware Desktop with a Windows 7 64-bit Guest OS that has all my .NET dev tools, frameworks, etc. loaded on it. It's configured to use 2 cores and up to 6GB of memory, since I figure the dev environment will need more than email, Word, etc.
    So, this seemed like a good option to me, but I find that with the VM running, things tend to slow down all around on both the host and guest OS. Memory and CPU utilization don't seem to be an issue, but I/O does. I tried running the VM on an external eSATA drive, figuring that the extra channel might pick up the slack, but things only got worse (could be my eSATA enclosure). So, for all of that, I have basically two questions in one:
    1. Has anyone used this sort of setup, and are there any gotchas, either around the VMware configuration or anything else I may have missed here, that you can point me to?
    2. Is there another option that might work better? For example, I've considered trying a lighter-weight host OS and running both of my environments as VMs. I tried this with Server 2008 Hyper-V, but I lose too much laptop functionality going this route, so I never completed the setup. I'm not averse to Linux as a host OS, though I'm no Linux expert.
    If I'm missing any critical info, feel free to ask. Thanks in advance for your help. Steve

    Read the article

  • VirtualBox: Why are some USB devices disabled?

    - by torbengb
    Overview: My host OS is Ubuntu 10.10 and the guest OS is WinXP, on the VirtualBox version downloaded from Oracle including the "VirtualBox 4.0 Oracle VM VirtualBox Extension Pack" so that USB passthrough works. This works in general (I was able to back up my iPhone to iTunes in the guest OS), but some devices aren't available even though they're provided in the VirtualBox settings.
    Specifics: In the VirtualBox settings for the guest OS, there's the part where you can select which of your USB devices should be visible to the guest OS. I've selected several devices including the iPhone. So far so good. Then an iOS upgrade came along; my iPhone is now in DFU mode (or recovery mode?) and presents itself not as "iPhone" but as "iPhone (DFU mode)". I have now also added this device to the list of USB devices that the guest OS should see, but it doesn't see this device.
    Questions: Am I right in expecting that the guest OS ought to see the DFU device when I add it in the VirtualBox settings? What steps do I need to take so that the guest OS will really see the DFU device?

    Read the article

  • OpenAFS on Fedora/CentOS

    - by Michael Pliskin
    I am trying to see if OpenAFS fits my needs as a distributed filesystem and am a bit stuck. There are docs, but they're all quite hard to understand, so I'm asking for some expert advice here. My questions:
    1. Which version should I install? I need Windows client support, so I need 1.5, right? But it is not stable... or is it? And I don't see any pre-built RPMs for it, so do I compile from sources?
    2. I tried to compile and it worked, but it created a non-"mp" kernel module while my kernel needs an mp one. How do I work around that?
    3. Do I really need a fresh new partition to start with, or can I re-use an existing one and just make it available via afp?
    4. Any nice HOWTOs around?

    Read the article
