Search Results

Search found 17856 results on 715 pages for 'setup py'.

Page 302/715 | < Previous Page | 298 299 300 301 302 303 304 305 306 307 308 309  | Next Page >

  • DIY Weather-Aware Umbrella Stand Signals Stormy Weather

    - by Jason Fitzpatrick
    This clever DIY project adds ambient weather notification to your umbrella stand: simply walk by it on your way out the door to get a subtle reminder to take your umbrella. The setup involves a hobby board, motion detection, and LEDs put to a rather clever end. As you walk by the semi-translucent umbrella stand it is all mounted in, the stand lights up to indicate the weather conditions. Blue indicates the forecast for the day shows no sign of rain, green indicates rain, and red indicates thunderstorms. Check out the above video to see the hardware involved and the stand in action; hit up the link below for the full build guide, including code. DIY Umbrella Stand Hack with Rain Alert [via Make]

    Read the article

  • dovecot can't compact mail folder /var/mail/username

    - by G. He
    Ubuntu 11.10, 32-bit. I set up a Dovecot IMAP server and I'm using Thunderbird on a different Ubuntu machine (64-bit) to access it. Everything else is fine, except I cannot compact the deleted email in the inbox, which is stored at /var/mail/username. Checking mail.log, I see this error message:

        Apr 3 00:10:11 autumn dovecot: imap(username): Error: file_dotlock_create(/var/mail/username) failed: Permission denied (euid=1000(username) egid=1000(username) missing +w perm: /var/mail, euid is not dir owner) (set mail_privileged_group=mail)

    What is wrong with the permissions? Here are the permissions for the relevant files:

        $ ls -ld /var/mail
        drwxrwsr-x 2 mail mail 4096 2012-04-02 23:36 /var/mail
        $ ls -l /var/mail/username
        -rw------- 1 username mail 417 2012-04-02 23:36 /var/mail/username

    Does anyone know what's going on here?

    Read the article

  • Transitioning from Oracle based CMS to MySQL based CMS

    - by KM01
    We're looking at a replacement for our CMS, which runs on Oracle. The new CMSes that we've looked at can in theory run on Oracle, but: most of the vendor's installs run off of MySQL; the vendor supports installing their CMS on MySQL, and a "theoretical" install on Oracle; the vendor's dev shops use MySQL; none of them develop/test against Oracle. Our DBA team works exclusively with Oracle, and doesn't have the bandwidth to provide additional support for a highly available and performant MySQL setup. They could in theory go to training and get ramped up, but our timeline is also short (surprise!). So ... I guess my question(s) are: If you've seen a situation like this, how have you dealt with it? What tipped the balance either way? What type of effort did it take? If you were to do it over, what would you do differently ... ? Thanks! KM

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage.  For example, if you want to give each developer a full copy of your production server, you’ll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you’ve read my previous blog posts, you’ll be aware that I’ve been focusing on the database continuous integration theme. In my CI setup I create a “production”-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it’s not the exact equivalent to running the upgrade script on a copy of the actual production database. So why shouldn’t I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn’t an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley’s “Continuous Delivery” teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you’ve been allotted. 2. It’s not just about the storage requirements, it’s also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I’m just not going to get the feedback quickly enough to react. So what’s the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I’m sure the technical implementation details under the hood are quite complex!) Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data and log file, .vmdf, and .vldf, that becomes the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database as from SQL Server’s point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there is no ‘duplicate’ storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly “release test” process triggered by my CI tool. 
        RESTORE DATABASE WidgetProduction_Virtual
        FROM DISK=N'D:\VirtualDatabase\WidgetProduction.bak'
        WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
        MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
        NORECOVERY, STATS=1, REPLACE
        GO
        RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY

    Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the ‘virtual’ restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, here is my 8 GB production database: And its corresponding backup file: Here are the .vldf and .vmdf files, which represent the only additional storage used for the new database following the virtual restore. The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server

    Read the article

  • Some Adsense domain's ads are causing document.write() statements that remove the html from the page

    - by er1234
    All that is output on the page is the domain name of the advertiser, for example 'www.solar-aid.org'. The rest of the content is stripped, I believe because of a document.write() statement. I'd like to know if this is a common issue or something wrong with our setup. There are three domains causing the issue, which we've blocked from Adsense as a result. solar-aid.org kiva.org grameenfoundation.org Given the type of organizations I think they may be within the default group of 'public service ads' within the Backup Ads setting. If the issue doesn't completely resolve itself soon (one customer of ours complained today, even though I blocked them 5+ days ago), I'll disable public service ads and select the 'fill space with a solid color' option.

    Read the article

  • setting up mod_proxy - plesk, apache, .htaccess?

    - by sam
    I want to set up mod_proxy so that my blog runs under a subdirectory of my site rather than a subdomain, so I get the SEO backlink benefit. What I'm looking to do is take my Tumblr blog, which is running at blog.mysite.com (which is in turn mapped from myblog.tumblr.com), and have it running on mysite.com/blog. How can I set up mod_proxy to do this? Is it just something that I can set up from inside my .htaccess file? I've got my site hosted on an Apache server, using Plesk as a control panel. I contacted my web host and they told me mod_rewrite could achieve it; they gave me this, but said they won't provide me further support regarding mod_rewrite as it's something they don't support:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^example.co.uk/blog$
        RewriteCond %{REQUEST_URI} !/standard
        RewriteRule ^(.*)$ http://example.tumblr.com$1 [R]
        </IfModule>

    Ideally I'd like to use the mod_proxy method as it is recommended from an SEO point of view in this article: http://www.seomoz.org/blog/what-is-a-reverse-proxy-and-how-can-it-help-my-seo

    Read the article

  • HDFC Bank's Journey to Oracle Private Database Cloud

    - by Nilesh Agrawal
    One of the key takeaways from a recent post by Sushil Kumar is the importance of business initiative that drives the transformational journey from legacy IT to enterprise private cloud. The journey that leads to a agile, self-service and efficient infrastructure with reduced complexity and enables IT to deliver services more closely aligned with business requirements. Nilanjay Bhattacharjee, AVP, IT of HDFC Bank presented a real-world case study based on one such initiative in his Oracle OpenWorld session titled "HDFC BANK Journey into Oracle Database Cloud with EM 12c DBaaS". The case study highlighted in this session is from HDFC Bank’s Lending Business Segment, which comprises roughly 50% of Bank’s top line. Bank’s Lending Business is always under pressure to launch “New Schemes” to compete and stay ahead in this segment and IT has to keep up with this challenging business requirement. Lending related applications are highly dynamic and go through constant changes and every single and minor change in each related application is required to be thoroughly UAT tested certified before they are certified for production rollout. This leads to a constant pressure in IT for rapid provisioning of UAT databases on an ongoing basis to enable faster time to market. Nilanjay joined Sushil Kumar, VP, Product Strategy, Oracle, during the Enterprise Manager general session at Oracle OpenWorld 2012. Let's watch what Nilanjay had to say about their recent Database cloud deployment. “Agility” in launching new business schemes became the key business driver for private database cloud adoption in the Bank. Nilanjay spent an hour discussing it during his session. Let's look at why Database-as-a-Service(DBaaS) model was need of the hour in this case  - Average 3 days to provision UAT Database for Loan Management Application Silo’ed UAT environment with Average 30% utilization Compliance requirement consume UAT testing resources DBA activities leads to $$ paid to SI for provisioning databases manually Overhead in managing configuration drift between production and test environments Rollout impact/delay on new business initiatives The private database cloud implementation progressed through 4 fundamental phases - Standardization, Consolidation, Automation, Optimization of UAT infrastructure. Project scoping was carried out and end users and stakeholders were engaged early on right from planning phase and including all phases of implementation. Standardization and Consolidation phase involved multiple iterations of planning to first standardize on infrastructure, db versions, patch levels, configuration, IT processes etc and with database level consolidation project onto Exadata platform. It was also decided to have existing AIX UAT DB landscape covered and EM 12c DBaaS solution being platform agnostic supported this model well. Automation and Optimization phase provided the necessary Agility, Self-Service and efficiency and this was made possible via EM 12c DBaaS. EM 12c DBaaS Self-Service/SSA Portal was setup with required zones, quotas, service templates, charge plan defined. There were 2 zones implemented - Exadata zone  primarily for UAT and benchmark testing for databases running on Exadata platform and second zone was for AIX setup to cover other databases those running on AIX. Metering and Chargeback/Showback capabilities provided business and IT the framework for cloud optimization and also visibility into cloud usage. 
More details on UAT cloud implementation, related building blocks and EM 12c DBaaS solution are covered in Nilanjay's OpenWorld session here. Some of the key Benefits achieved from UAT cloud initiative are - New business initiatives can be easily launched due to rapid provisioning of UAT Databases [ ~3 hours ] Drastically cut down $$ on SI for DBA Activities due to Self-Service Effective usage of infrastructure leading to  better ROI Empowering  consumers to provision database using Self-Service Control on project schedule with DB end date aligned to project plan submitted during provisioning Databases provisioned through Self-Service are monitored in EM and auto configured for Alerts and KPI Regulatory requirement of database does not impact existing project in queue This table below shows typical list of activities and tasks involved when a end user requests for a UAT database. EM 12c DBaaS solution helped reduce UAT database provisioning time from roughly 3 days down to 3 hours and this timing also includes provisioning time for database with production scale data (ranging from 250 G to 2 TB of data) - And it's not just about time to provision,  this initiative has enabled an agile, efficient and transparent UAT environment where end users are empowered with real control of cloud resources and IT's role is shifted as enabler of strategic services instead of being administrator of all user requests. The strong collaboration between IT and business community right from planning to implementation to go-live has played the key role in achieving this common goal of enterprise private cloud. Finally, real cloud is here and this cloud is accompanied with rain (business benefits) as well ! For more information, please go to Oracle Enterprise Manager  web page or  follow us at :  Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • Managing Multiple dedicated servers centrally using a Web GUI tools?

    - by Sampath
    Application architecture: I have a single Ruby on Rails application running with multiple instances (i.e. each client having an identical subdomain) across multiple dedicated servers using Phusion Passenger + nginx. Subdomain setup is done using the vhost option in the nginx Passenger module. For example: server 1 serves clients 1 - 100 with identical subdomains www.client1.product.com up to www.client100.product.com; server 2 serves clients 101 - 200 with identical subdomains www.client101.product.com up to www.client200.product.com; server 3 serves clients 201 - 300 with identical subdomains www.client201.product.com up to www.client300.product.com. My question is: I need to centrally manage all my N dedicated servers using a GUI tool. I am looking for a web GUI tool to manage tasks like 1) back up all MySQL databases automatically from all dedicated servers and send them to some FTP backup drive 2) back up files and folders from all dedicated servers and send them to some FTP backup drive 3) manage the firewall (CSF http://configserver.com/cp/csf.html) centrally for all dedicated servers 4) view server load and bandwidth usage graphically for all N dedicated servers. Note: I would prefer an open source solution
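    Not a GUI, but as an illustration of what task 1 amounts to under the hood, here is a minimal, hypothetical Python sketch that dumps MySQL on each server over SSH and pushes the dumps to an FTP backup drive. Hostnames, credentials, and paths are placeholders, and it assumes key-based SSH access plus a ~/.my.cnf on each server; treat it as a sketch, not a drop-in tool.

        #!/usr/bin/env python
        # Hypothetical sketch for task 1: mysqldump each dedicated server over SSH,
        # then upload the compressed dumps to an FTP backup drive.
        import subprocess
        from datetime import date
        from ftplib import FTP

        SERVERS = ["server1.product.com", "server2.product.com", "server3.product.com"]  # placeholders
        FTP_HOST, FTP_USER, FTP_PASS = "backup.example.com", "backupuser", "secret"       # placeholders

        def dump_server(host):
            """Run mysqldump remotely via ssh and write the gzipped dump locally."""
            outfile = "%s-%s.sql.gz" % (host, date.today().isoformat())
            with open(outfile, "wb") as fh:
                subprocess.check_call(["ssh", host, "mysqldump --all-databases | gzip"], stdout=fh)
            return outfile

        def upload(path):
            """Send one dump file to the FTP backup drive."""
            ftp = FTP(FTP_HOST)
            ftp.login(FTP_USER, FTP_PASS)
            with open(path, "rb") as fh:
                ftp.storbinary("STOR " + path, fh)
            ftp.quit()

        if __name__ == "__main__":
            for server in SERVERS:
                upload(dump_server(server))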

    Read the article

  • Running TeamCity from Amazon EC2 - Cloud based scalable build and continuous Integration

    I've been having fun playing with the Amazon EC2 cloud service. I set up a server running TeamCity, and an image of a server that just runs a TeamCity agent. I also set up TeamCity to automatically instantiate agents on EC2 and shut them down based upon the availability of free agents. Here's how I did it: The first step was setting up the TeamCity server. Create an account on Amazon EC2 (BTW, Amazon's site works better in IE than it does in Chrome.. who knew!?) Open the EC2 dashboard, and...

    Read the article

  • Why Ubuntu Server asks to insert a CD-ROM when installed from PXE?

    - by MainMa
    I set up a PXE server which hosts both Ubuntu Desktop and Ubuntu Server. Ubuntu Desktop installs successfully from PXE. Ubuntu Server seems to successfully load vmlinuz and initrd.gz, asks for the language, then the location, then the keyboard layout, and finally complains that it can't mount the CD-ROM. The content of /var/lib/tftpboot/pxelinux.cfg/default is the following:

        default ubuntu-installer/amd64/boot-screens/vesamenu.c32
        menu title Ubuntu setup

        label ubuntu-14.04-desktop-amd64
          menu label ubuntu-14.04-desktop-amd64
          kernel ubuntu-14.04-desktop-amd64/vmlinuz.efi
          append initrd=ubuntu-14.04-desktop-amd64/initrd.lz root=/dev/nfs boot=casper netboot=nfs nfsroot=192.168.1.41:/exports/ubuntu-14.04-desktop-amd64 splash --

        label ubuntu-14.04-server-amd64
          menu label ubuntu-14.04-server-amd64
          kernel ubuntu-14.04-server-amd64/vmlinuz
          append initrd=ubuntu-14.04-server-amd64/initrd.gz root=/dev/nfs boot=install netboot=nfs nfsroot=192.168.1.41:/exports/ubuntu-14.04-server-amd64 splash --

    What explains the fact that it requests the CD-ROM, and how can I avoid it?

    Read the article

  • Blogging platform for shared hosting?

    - by Ankit
    First of all, this is not for flame wars, so please do not bash any blogging platform :P I have a shared hosting account and I want to set up a blog on it. The thing is that, as it is shared hosting, I do not have as much bandwidth as some expensive hosting. I have tried using WordPress with caching plugins configured; it still does not seem to speed things up much. If I create a simple PHP website, though, the speed is fine. Can someone suggest a lightweight platform?

    Read the article

  • Rebooting access point via SSH with pexpect... hangs. Any ideas?

    - by MiniQuark
    When I want to reboot my D-Link DWL-3200-AP access point from my bash shell, I connect to the AP using ssh and just type reboot in the CLI interface. After about 30 seconds, the AP is rebooted:

        # ssh [email protected]
        [email protected]'s password: ********
        Welcome to Wireless SSH Console!! ['help' or '?' to see commands]
        Wireless Driver Rev 4.0.0.167 D-Link Access Point
        wlan1 -> reboot

    Sounds great? Well, unfortunately the ssh client process never exits, for some reason (maybe the AP kills the ssh server a bit too fast, I don't know). My ssh client process is completely blocked (even if I wait for several minutes, nothing happens). I always have to wait for the AP to reboot, then open another shell, find the ssh client process ID (using ps aux | grep ssh), then kill the ssh process using kill <pid>. That's quite annoying. So I decided to write a Python script to reboot the AP. The script connects to the AP's CLI interface via ssh, using python-pexpect, and tries to launch the "reboot" command. Here's what the script looks like:

        #!/usr/bin/python
        # usage: python reboot_ap.py {host} {user} {password}
        import pexpect
        import sys
        import time

        command = "ssh %(user)s@%(host)s" % {"user": sys.argv[2], "host": sys.argv[1]}
        session = pexpect.spawn(command, timeout=30)   # start ssh process
        response = session.expect(r"password:")        # wait for password prompt
        session.sendline(sys.argv[3])                  # send password
        session.expect(" -> ")                         # wait for D-Link CLI prompt
        session.sendline("reboot")                     # send the reboot command
        time.sleep(60)               # make sure the reboot has time to actually take place
        session.close(force=True)    # kill the ssh process

    The script connects properly to the AP (I tried running some other commands than reboot; they work fine), it sends the reboot command, waits for one minute, then kills the ssh process. The problem is: this time, the AP never reboots! I have no idea why. Any solution, anyone?
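    One hedged tweak worth trying (an untested sketch, not a confirmed fix for the DWL-3200-AP): rather than sleeping for a fixed 60 seconds, wait for the ssh session to drop and force-close it only if it never does, so the script does not depend on the AP closing the connection at any particular moment.

        #!/usr/bin/python
        # Hypothetical variant of reboot_ap.py: wait for the connection to drop
        # (EOF) instead of sleeping a fixed period. Untested against this AP.
        import sys
        import pexpect

        command = "ssh %s@%s" % (sys.argv[2], sys.argv[1])
        session = pexpect.spawn(command, timeout=30)
        session.expect(r"password:")
        session.sendline(sys.argv[3])
        session.expect(" -> ")
        session.sendline("reboot")
        try:
            # Give the AP up to 60 seconds to drop the connection on its own.
            session.expect(pexpect.EOF, timeout=60)
        except pexpect.TIMEOUT:
            pass  # the AP never closed the session cleanly
        finally:
            session.close(force=True)  # make sure the ssh process is gone either way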

    Read the article

  • Best way of Javascript web development in Netbeans (Hot deployment)

    - by marcelocbf
    I'm beginning JavaScript development, and as a beginner in JavaScript I make a lot of mistakes. The way I'm developing is very counter-productive, because for every mistake I fix I have to shut down GlassFish, rebuild the app and redeploy it. My app is a Java backend with REST services, and HTML, JavaScript, and CSS for the frontend. Everything is packed in a .ear file. Right now I'm just working on the frontend, but I still have to go through this whole process to update the files. My question is ... is there a better way of doing this? Can somebody tell me how you work in a similar setup for everyday development?
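    One hedged workaround sketch (not GlassFish-specific advice, and the paths are pure placeholders): if the server serves the app from an exploded directory, a small watcher that copies edited HTML/JS/CSS straight into that directory lets the browser pick up frontend changes without rebuilding the .ear. Whether this is safe depends on how the app is deployed.

        #!/usr/bin/env python
        # Hypothetical sketch: mirror edited frontend files into the exploded
        # deployment directory so static changes show up without a redeploy.
        # Both paths are placeholders/assumptions about the local setup.
        import os
        import shutil
        import time

        SRC = "src/main/webapp"                        # where the HTML/JS/CSS is edited
        DST = "/path/to/glassfish/exploded/myapp-war"  # exploded web module (assumption)

        def sync_once():
            for root, _dirs, files in os.walk(SRC):
                for name in files:
                    src = os.path.join(root, name)
                    dst = os.path.join(DST, os.path.relpath(src, SRC))
                    # copy only files newer than the deployed copy
                    if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
                        os.makedirs(os.path.dirname(dst), exist_ok=True)
                        shutil.copy2(src, dst)
                        print("updated", dst)

        if __name__ == "__main__":
            while True:
                sync_once()
                time.sleep(1)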

    Read the article

  • Who broke the build?

    - by Martin Hinshelwood
    I recently sent round a list of broken builds at SSW and asked for them to be fixed or deleted if they are not being used. My colleague Peter came back with a couple of questions which I love as it tells me that at least one person reads my email I think first we need to answer a couple of other questions related to builds in general.   Why do we want the build to pass? Any developer can pick up a project and build it Standards can be enforced Constant quality is maintained Problems in code are identified early What could a failed build signify? Developers have not built and tested their code properly before checking in. Something added depends on a local resource that is not under version control or does not exist on the target computer. Developers are not writing tests to cover common problems. There are not enough tests to cover problems. Now we know why, lets answer Peters questions: Where is this list? (can we see it somehow) You can normally only see the builds listed for each project. But, you have a little application called “Build Notifications” on your computer. It is installed when you install Visual Studio 2010. Figure: Staring the build notification application on Windows 7. Once you have it open (it may disappear into your system tray) you should click “Options” and select all the projects you are involved in. This application only lists projects that have builds, so don’t worry if it is not listed. This just means you are about to setup a build, right? I just selected ALL projects that have builds. Figure: All builds are listed here In addition to seeing the list you will also get toast notification of build failure’s. How can we get more info on what broke the build? (who is interesting too, to point the finger but more important is what) The only thing worse than breaking the build, is continuing to develop on a broken build! Figure: I have highlighted the users who either are bad for braking the build, or very bad for not fixing it. To find out what is wrong with a build you need to open the build definition. You can open a web version by double clicking the build in the image above, or you can open it from “Team Explorer”. Just connect to your project and open out the “Builds” tree. Then Open the build by double clicking on it. Figure: Opening a build is easy, but double click it and then open a build run from the list. Figure: Good example, the build and tests have passed Figure: Bad example, there are 133 errors preventing POK from being built on the build server. For identifying failures see: Solution: Getting Silverlight to build on Team Build 2010 RC Solution: Testing Web Services with MSTest on Team Build Finding the problem on a partially succeeded build So, Peter asked about blame, let’s have a look and see: Figure: The build has been broken for so long I have no idea when it was broken, but everyone on this list is to blame (I am there too) The rest of the history is lost in the sands of time, there is no way to tell when the build was originally broken, or by whom, or even if it ever worked in the first place. Build should be protected by the team that uses them and the only way to do that is to have them own them. It is fine for me to go in and setup a build, but the ownership for a build should always reside with the person who broke it last. Conclusion This is an example of a pointless build. 
    Let’s be honest: if you have a system like TFS in place and builds are constantly left broken, or not added to projects, then your developers don’t yet understand the value. I have found that adding a Gated Check-in helps instil that understanding of value. If you prevent them from checking in without passing that basic quality gate of “your code builds on another computer”, it makes them look more closely at why they can’t check in. I have had builds fail because one developer had a “d” drive, but the build server did not. That is what they are there to catch. If you want to know what builds to create and why, I wrote a post on “Do you know the minimum builds to create on any branch?” Technorati Tags: TFS2010, Gated Check-in, Builds, Build Failure, Broken Build

    Read the article

  • Bullet physics debug drawing not working

    - by Krishnabhadra
    Background I am following on from this question, which isn't answered yet. Basically I have a cube and a UVSphere in my scene, with UVSphere on the top of the cube without touching the cube. Both exported from blender. When I run the app The UVSphere does circle around the cube for 3 or 4 times and jump out of the scene. What I actually expect was the sphere to fall on top of the cube. What this question about From the comment to the linked question, I got to know about bullet debug drawing, which helps in debugging by drawing outline of physics bodies which are normally invisible. I did some research on that and came up with the code given below. From whatever I have read, below code should work, but it doesn't. My Code My bullet initialization code. -(void) initializeScene { /*Setup physics world*/ _physicsWorld = [[CC3PhysicsWorld alloc] init]; [_physicsWorld setGravity:0 y:-9.8 z:0]; /*Setting up debug draw*/ MyDebugDraw *draw = new MyDebugDraw; draw->setDebugMode(draw->getDebugMode() | btIDebugDraw::DBG_DrawWireframe ); _physicsWorld._discreteDynamicsWorld->setDebugDrawer(draw); /*Setup camera and lamb*/ ………….. //This simpleCube.pod contains the cube [self addContentFromPODFile: @"simpleCube.pod"]; //This file contains sphere [self addContentFromPODFile: @"SimpleSphere.pod"]; [self createGLBuffers]; CC3MeshNode* cubeNode = (CC3MeshNode*)[self getNodeNamed:@"Cube"]; CC3MeshNode* sphereNode = (CC3MeshNode*)[self getNodeNamed:@"Sphere"]; // both cubeNode and sphereNode are not nil from this point float *cVertexData = (float*)((CC3VertexArrayMesh*)cubeNode.mesh) .vertexLocations.vertices; int cVertexCount = ((CC3VertexArrayMesh*)cubeNode.mesh) .vertexLocations.vertexCount; btTriangleMesh* cTriangleMesh = new btTriangleMesh(); int offset = 0; for (int i = 0; i < (cVertexCount / 3); i++) { unsigned int index1 = offset; unsigned int index2 = offset+6; unsigned int index3 = offset+12; cTriangleMesh->addTriangle( btVector3(cVertexData[index1], cVertexData[index1+1], cVertexData[index1+2]), btVector3(cVertexData[index2], cVertexData[index2+1], cVertexData[index2+2]), btVector3(cVertexData[index3], cVertexData[index3+1], cVertexData[index3+2])); offset += 18; } [self releaseRedundantData]; /*Create a triangle mesh from the vertices*/ btBvhTriangleMeshShape* cTriMeshShape = new btBvhTriangleMeshShape(cTriangleMesh,true); btCollisionShape *sphereShape = new btSphereShape(1); gTriMeshObject = [_physicsWorld createPhysicsObjectTrimesh:cubeNode shape:cTriMeshShape mass:0 restitution:1.0 position:cubeNode.location]; sphereObject = [_physicsWorld createPhysicsObject:sphereNode shape:sphereShape mass:1 restitution:0.1 position:sphereNode.location]; sphereObject.rigidBody->setDamping(0.1,0.8); /*Enable debug drawing*/ _physicsWorld._discreteDynamicsWorld->debugDrawWorld(); } And My btIDebugDraw implementation (MyDebugDraw.h) //MyDebugDraw.h class MyDebugDraw: public btIDebugDraw{ int m_debugMode; public: virtual void drawLine(const btVector3& from,const btVector3& to ,const btVector3& color); virtual void drawContactPoint(const btVector3& PointOnB ,const btVector3& normalOnB,btScalar distance ,int lifeTime,const btVector3& color); virtual void reportErrorWarning(const char* warningString); virtual void draw3dText(const btVector3& location ,const char* textString); virtual void setDebugMode(int debugMode); virtual int getDebugMode() const; }; void MyDebugDraw::drawLine(const btVector3& from,const btVector3& to ,const btVector3& color){ LogInfo(@"Works!!"); glPushMatrix(); glColor4f(color.getX(), 
color.getY(), color.getZ(), 1.0); const GLfloat line[] = { from.getX()*1, from.getY()*1, from.getZ()*1, //point A to.getX()*1, to.getY()*1,to.getZ()*1 //point B }; glVertexPointer( 3, GL_FLOAT, 0, &line ); glPointSize( 5.0f ); glDrawArrays( GL_POINTS, 0, 2 ); glDrawArrays( GL_LINES, 0, 2 ); glPopMatrix(); } void MyDebugDraw::drawContactPoint(const btVector3 &PointOnB ,const btVector3 &normalOnB, btScalar distance ,int lifeTime, const btVector3 &color){ } void MyDebugDraw::reportErrorWarning(const char *warningString){ } void MyDebugDraw::draw3dText(const btVector3 &location , const char *textString){ } void MyDebugDraw::setDebugMode(int debugMode){ } int MyDebugDraw::getDebugMode() const{ return DBG_DrawWireframe; } My Problem The drawLine method is getting called. I can see the cube and sphere in place. Sphere again does some circling around the cube before jumping off. No debug lines are getting drawn.

    Read the article

  • How to Turn Your Ubuntu Laptop into a Wireless Access Point

    - by Chris Hoffman
    If you have a single wired Internet connection – say, in a hotel room – you can create an ad-hoc wireless network with Ubuntu and share the Internet connection among multiple devices. Ubuntu includes an easy, graphical setup tool. Unfortunately, there are some limitations. Some devices may not support ad-hoc wireless networks and Ubuntu can only create wireless hotspots with weak WEP encryption, not strong WPA encryption.

    Read the article

  • Using portable mode of TrueCrypt for encrypting portable HDD, USB, etc.

    - by Gaurav_Java
    I was wondering how to encrypt my external HDD so that my data would be safe AND accessible from any OS platform (Windows and Linux). So I went through many posts and forums and found out the best thing was to use TrueCrypt. I went along with this post and encrypted my USB drive. When I inserted it, Windows didn't recognize it. Then I followed another post that said to install TrueCrypt. The thing is: How can I use the traveler disk setup in Ubuntu (option not found)? Can I install TrueCrypt on the USB drive so that I can mount my USB or HDD on any other OS? If there is no solution for the above two, then is there any other software which I can use to encrypt a USB device so that it can be accessed from any OS?

    Read the article

  • Renoise and dssi and jack

    - by Bojan
    This may be a little complicated, but is JACK a necessity? I mean, I use Renoise, and since I don't need low latencies, do I really need to use it? My basic setup (or workflow) is that I use Csound to render stuff to WAV, then import it as a sample in Renoise. That goes for field recordings, my own samples, etc. So I don't need ultra-low latencies, and I don't need to patch "cords", but I want to use DSSI plugins, and dssi-vst. What would be the minimum set of apps that should work? Can Renoise load dssi-vst plugins by itself, or do I need to use JACK to patch through, or something else? I tried to read a lot of articles but I got lost in the different setups...

    Read the article

  • Tips/Process for web-development using Django in a small team

    - by Mridang Agarwalla
    We're developing a web app using Django and we're a small team of 3-4 programmers — some doing the UI stuff and some doing the backend stuff. I'd love some tips and suggestions from the people here. This is our current setup: We're using Git as our SCM tool and following this branching model. We're following PEP 8 as our style guide. Agile is our software development methodology and we're using Jira for that. We're using the Confluence plugin for Jira for documentation, and I'm going to be writing a script that also dumps the PyDocs into Confluence. We're using virtualenv for sandboxing and zc.buildout for building. This is whatever I can think of off the top of my head. Any other suggestions/tips would be welcome. I feel that we have a pretty good setup, but I'm also confident that we could do more. Thanks.
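    For the "dump the PyDocs into Confluence" script mentioned above, here is a minimal, hypothetical sketch of one way it could work: render a module's documentation with pydoc and create a page through Confluence's REST content API. The base URL, space key, credentials, and module names are placeholders, and whether Confluence accepts raw pydoc HTML as storage-format markup is an assumption you would need to verify.

        #!/usr/bin/env python
        # Hypothetical sketch: push pydoc output for a few modules into Confluence
        # via its REST API. All names, URLs, and credentials are placeholders.
        import pydoc
        import requests

        CONFLUENCE = "https://confluence.example.com"
        SPACE_KEY = "DEV"
        AUTH = ("api-user", "api-token")

        def publish_module_docs(module_name):
            module = pydoc.locate(module_name)           # import the module by dotted name
            html = pydoc.HTMLDoc().docmodule(module)     # render its pydoc as HTML
            payload = {
                "type": "page",
                "title": "PyDoc: %s" % module_name,
                "space": {"key": SPACE_KEY},
                "body": {"storage": {"value": html, "representation": "storage"}},
            }
            resp = requests.post(CONFLUENCE + "/rest/api/content", json=payload, auth=AUTH)
            resp.raise_for_status()

        if __name__ == "__main__":
            for name in ("myapp.models", "myapp.views"):  # hypothetical modules
                publish_module_docs(name)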

    Read the article

  • shader coding: calculate screen coordinates of fragment

    - by Jay
    Good morning, I'm new to shader coding and trying to implement some visual effects code in shaders using billboards. (Yes, I couldn't have picked anything harder to start with, but I'm lucky that way.) Setup: I have rendered the full-screen z depth to an array of floats in a previous pass. In the fragment shader I need the scene depth where the rendered fragment is displayed (to see if it's occluded). I can use tex2D() to get the depth value if I have the screen coordinates of the point being rendered in the fragment shader. Question: In the fragment shader, how do you calculate the screen coordinates of the pixel (in the range 0-1.0)? Is the position passed to the fragment shader a pixel offset? If so, I guess it would be: float2( position.x / screen-width, position.y / screen-height ) Thanks for any help!
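    The arithmetic the question is guessing at can be sanity-checked outside a shader. A tiny Python sketch of the pixel-to-UV mapping follows; whether the half-pixel offset is needed (and whether the position is already in pixels) depends on the shading language and API convention, so treat both as assumptions to verify.

        # Minimal arithmetic sketch (plain Python, not shader code) of mapping a
        # pixel position to 0..1 screen coordinates, optionally at texel centres.
        def pixel_to_uv(x, y, screen_w, screen_h, half_pixel_offset=True):
            offset = 0.5 if half_pixel_offset else 0.0
            return ((x + offset) / screen_w, (y + offset) / screen_h)

        if __name__ == "__main__":
            # the centre pixel of a 1280x720 target maps to roughly (0.5, 0.5)
            print(pixel_to_uv(639, 359, 1280, 720))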

    Read the article

  • Falcoda Sub Domains [on hold]

    - by harry_p_6
    I run a website on Falcoda, using the home package, but the option to edit subdomains has gone. Please help! I had already set up some using the hosting console previously, but it was updated and now the link is gone. I found that you can use web.hostingconsole.info/sub-domains.cgi, but it says I can have 0 subdomains and I currently have -5 left. On the home package it says you have unlimited. Is this a problem with mine only, or is this happening to everybody?

    Read the article

  • How to Quickly Encrypt Removable Storage Devices with Ubuntu

    - by Chris Hoffman
    Ubuntu can quickly encrypt USB flash drives and external hard drives. You’ll be prompted for your passphrase each time you connect the drive to your computer – your private data will be secure, even if you misplace the drive. Ubuntu’s Disk Utility uses LUKS (Linux Unified Key Setup) encryption, which may not be compatible with other operating systems. However, the drive will be plug-and-play with any Linux system running the GNOME desktop.

    Read the article

  • how to re-use a sprite in cocos2d-x

    - by zinking
    Sometimes it takes time to create the sprite structures in the scene. I might need to set up structures inside this sprite to meet requirements, so I would hope to reuse such structures within the game again and again. I tried that: remove the child from its parent, detach it from the parent, clear the sprite's parent. But when I try to add the sprite to another scene, it just won't pass the assertion that the sprite already has a parent. Did I miss some step? To add an example: I have a sprite A which involves quite a few steps to construct, so I use it in scene A layer A, and then I want to use it in scene A layer B, scene B layer A1, etc. Generally speaking, I don't want to reconstruct the sprite again.

    Read the article
