Search Results

Search found 491 results on 20 pages for 'staging'.

Page 11/20

  • SQL SERVER – Color Coding SQL Server Management Studio Status Bar – SQL in Sixty Seconds #023 – Video

    - by pinaldave
    I often see developers executing unplanned code on the production server when they actually meant to run it on the development server. Developers and DBAs get confused because when they use SQL Server Management Studio (SSMS) they forget to pay attention to the server they are connecting to. It is very easy to fix this problem: you can assign a different status bar color to each server. Once each server has its own color in the status bar, it is much easier for a developer to notice which server they are about to execute a script against. Personally, when I work on SQL Server development, here is the color code I follow: green for my development server, blue for my staging server, and red for my production server. The specific colors do not matter much; the key is that each server gets a distinct color. More Tips on SSMS in SQL in Sixty Seconds: Generate Script for Schema and Data in SQL Server – SQL in Sixty Seconds #021 | Remove Debug Button in SQL Server Management Studio – SQL in Sixty Seconds #020 | Three Tricks to Comment T-SQL in SQL Server Management Studio – SQL in Sixty Seconds #019 | Importing CSV into SQL Server – SQL in Sixty Seconds #018 | Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017. I encourage you to submit your ideas for SQL in Sixty Seconds; we will try to accommodate as many as we can, and if we like your idea we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • How can I add usbip modules on Redhat 6 kernel?

    - by Gk.
    I have RHEL 6 with:

        # uname -r
        2.6.32-131.0.15.el6.x86_64

    I'm trying to build the usbip modules from the staging drivers. Everything is OK and I have all the needed *.ko files, but I cannot add those modules to the running kernel:

        # pwd
        /lib/modules/2.6.32-131.0.15.el6.x86_64
        # ls | grep ko
        usbip_common_mod.ko usbip.ko vhci-hcd.ko
        # modprobe usbip
        FATAL: Error inserting usbip (/lib/modules/2.6.32-131.0.15.el6.x86_64/usbip.ko): Required key not available
        # insmod usbip.ko
        insmod: error inserting 'usbip.ko': -1 Required key not available

    How can I add them? Do I need to rebuild the whole kernel? TIA, giobuon
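
    A minimal sketch of one way to get modprobe to pick the modules up, assuming they were built against this exact running kernel (the target directory is just a common convention); note that "Required key not available" generally points at module signature checking, so modules built outside the distribution's signed build may still be refused:

        # install the freshly built modules where modprobe can find them
        mkdir -p /lib/modules/$(uname -r)/extra
        cp usbip_common_mod.ko usbip.ko vhci-hcd.ko /lib/modules/$(uname -r)/extra/
        depmod -a
        # load the common module first, then the core driver and the virtual host controller
        modprobe usbip_common_mod
        modprobe usbip
        modprobe vhci-hcd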

    Read the article

  • SharePoint and COMException (0x80004005): Cannot complete this action

    - by Damon
    I ran into a small issue today working on a deployment. We were moving a custom ASP.NET control from my development environment into a SharePoint layout page on a staging environment. I was expecting some minor issues to arise since I had developed the control in an ASP.NET website project, but after getting everything moved over we got an obscure COMException error that looked like this: Cannot complete this action. Please try again. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Runtime.InteropServices.COMException: Cannot complete this action. [COMException (0x80004005): Cannot complete this action.] (lengthy stack trace omitted). Everything in the custom control was built using managed code, so we weren't sure why a COMException would suddenly appear. The control made use of an ITemplate to define its UI, so there was a lot of markup and binding code inside the template. As such, we started taking chunks of the template out of the layout page and eventually the error went away. It was being caused by a section of code where we were calling a custom utility method inside some binding code: <%# WebUtility.FormatDecimal(.) %> Solution: It turns out that we were missing an Assembly and Import directive at the top of the page to let the page know where to find this method. After adding these to the page, the error went away and everything worked great. So a COMException (0x80004005) "Cannot complete this action" error is just SharePoint's friendly way of letting you know you're missing an assembly or imports reference.

    Read the article

  • SQL Server 2008 R2 SQL Server has encountered x occurrence(s) of I/O requests taking longer than 15 seconds to complete on file

    - by Natalia
    When I alter or create a stored procedure directly on the production or QA database, after a few seconds I start experiencing timeouts and the application becomes unavailable. The log shows this error: SQL Server has encountered 3 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\QA_Database.ldf] in database [QA_Database] (9). The OS file handle is 0x0000000000000568. The offset of the latest long I/O is: 0x0000002821a200. We have SQL Server 2008 R2 installed, including the latest Service Pack. Production and staging environments are completely separate. I tried to reproduce it on QA, but to no avail. I have no clue what it could be. I'd appreciate your help.
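
    For anyone hitting the same message, one hedged way to see which database files are actually accumulating write stalls is the sys.dm_io_virtual_file_stats DMV, e.g. from a command prompt on the server (the "." assumes a default local instance):

        sqlcmd -S . -E -Q "SELECT DB_NAME(database_id) AS db, file_id, num_of_writes, io_stall_write_ms FROM sys.dm_io_virtual_file_stats(NULL, NULL) ORDER BY io_stall_write_ms DESC;"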

    Read the article

  • How to monitor CPU usage and performance on a Hyper-V server with several VM's

    - by Bjørn
    Hello, I have a server running Windows 2008 64-bit Hyper-V, with 8 GB of RAM and an Intel Xeon X3440 @ 2.53 GHz, which gives me 8 logical cores in the performance monitor on the host system. I have set up three Virtual Machines, all running Windows 2008 32-bit: a build server running TeamCity, a staging server, and a SQL Server machine running SQL Server 2005. My trouble with the setup is that the host remains responsive at all times, even though the VMs are seemingly working at 100% CPU and are very sluggish and unresponsive. (I have asked a separate question about that.) So the question here is: what is the best way to monitor how the physical CPUs are actually utilized? The reason I am asking is that I am being told that I cannot reliably use Task Manager to monitor CPU usage in a VM.
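
    One commonly suggested approach is to watch the hypervisor's own counters from the host instead of Task Manager. A hedged sketch using typeperf (counter names assume the Hyper-V role's performance counters are present):

        typeperf "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time" -si 5
        typeperf "\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time" -si 5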

    Read the article

  • Iterative Conversion

    - by stuart ramage
    Question received: I am toying with the idea of migrating the current information first and the remainder of the history at a later date. I have heard that the conversion tool copes with this, but haven't found any information on how it does. Answer: The Toolkit will support iterative conversions as long as the original master data key tables (the CK_* tables) are not cleared down from Staging (the already converted Transactional Data would need to be cleared down) and the Production instance being migrated into is actually Production (we have migrated into a pre-prod instance in the past, then unloaded this and loaded it into the real PROD instance, but that will not work for your situation; you need to be migrating directly into your intended environment). In this case the migration tool will still know all about the original keys and the generated keys for the primary objects (Account, SA, etc.), and as such it will be able to link the data converted as part of a second pass onto these entities. It should be noted that this may result in the original opening balances potentially being displayed with an incorrect value (if we are talking about Financial Transactions), and also that care will have to be taken to ensure that all related objects are aligned (e.g. a Bill must have a set of bill segments, meter reads and financial transactions, and these entities cannot exist independently). It should also be noted that subsequent runs of the conversion tool would need to be 'trimmed' to ensure that they only do work on the objects affected. You would not want to revalidate and migrate all Person, Account, SA, SA/SP, SP and Premise details since this information has already been processed, but you would definitely want to run the affected transactional record validation and keygen processes. There is no real "hard-and-fast" rule around this processing since it is specific to each implementation's needs, but the majority of the effort required should be detailed in the Conversion Tool section of the online help (under Administration / The Conversion Tool). The major rule is to ensure that you only run the steps and validation/keygen steps that you need, and do not do a complete rerun for your subsequent conversion.

    Read the article

  • CPU's on Hyper-V host system is just idling, even though VM's are at full throttle

    - by Bjørn
    Hello, I have a server running Windows 2008 64-bit Hyper-V, with 8 GB of RAM and an Intel Xeon X3440 @ 2.53 GHz, which gives me 8 logical cores in the performance monitor on the host system. I have set up three Virtual Machines, all running Windows 2008 32-bit: a build server running TeamCity, a staging server, and a SQL Server machine running SQL Server 2005. These three machines are running very sluggishly; they are at 100% CPU even though the host system is barely using any CPU at all, typically below 10% total. Could anyone please give some tips on the best setup for CPU allocation? Should I have set each server to have two cores, or should I increase this number above the total number of cores on the host? What is a good number to set for the Virtual Machine Reserve and Virtual Machine Limit? Is 8 GB of physical RAM insufficient for 3 VMs? Thanks for reading. :)

    Read the article

  • Symbolic links and 7zip

    - by Fire Lancer
    I'm trying to compress a folder into a .7z archive. This folder contains symbolic links to some other stuff outside the folder (both directories and files). Apparently 7zip just archives the link itself, which is not what I intended. Is there a way to tell 7zip that I want it to archive the content the link points to, not the link itself (so if there is a symlink named "foo" which points to "C:\stuff\foo", I want it to include the "C:\stuff\foo" content in the archive in place of foo, not a "0 byte" symlink)? Failing that, is there any reasonable way around it, apart from adding the files and folders in question individually? Counting the content reached through the symlinks there are something like 10,000 files, the large proportion of which are behind symlinks, so adding them all individually would take hours. I'm thinking maybe a program that creates a staging folder with the real files in it and then passes that to 7zip, or just an archiver that handles them better?
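
    Failing a 7zip option that does exactly this, a rough sketch of the staging-folder idea (paths are made up, and robocopy is assumed to copy the targets of file symlinks when /SL is not given; verify that behaviour on your own mix of links and junctions first):

        rem stage a dereferenced copy of the tree, then archive the copy
        robocopy C:\project C:\staging /E
        7z a project.7z C:\staging\*
        rd /S /Q C:\staging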

    Read the article

  • PHP remote development workflow: git, symfony and hudson

    - by user2022
    I'm looking to develop a website and all the work will be done remotely (no local dev server). The reason for this is that my shared hosting company a2hosting has a specific configuration (symfony, mysql, git) that I don't want to spend time duplicating when I can just ssh and develop remotely or through NetBeans' remote editing features. My question is how I can use git to separate my site into three areas: live, staging and dev. Here's my initial thought:

        public_html (live site and git repo)
        testing: a mirror of the site used for visual tests (full git repo)
        dev/ticket# : git branches of public_html used for features and bug fixes (full git repo)

    Version control with git. Initial setup:

        cd public_html
        git init
        git add *
        git commit -m 'initial commit of the site'
        cd ..
        git clone public_html testing
        mkdir dev

    Development:

        cd dev
        git clone ../testing ticket#

    All work is done in ./dev/ticket#, then visit www.domain.com/dev/ticket# to visually test. Make granular commits as necessary until dev is done, then git push origin master:ticket#. If the above fails, merge the latest testing state into the current dev work with git merge origin/master, then try the push again. Mark ticket# as ready for integration.

    Integration and deployment process:

        cd ../../testing
        git merge ticket# -m "integration test for ticket#" --no-ff   (check for conflicts)
        run hudson tests
        visit www.domain.com/testing for visual test
        if all tests pass:
            if this ticket marks the end of a big dev sprint: make a snapshot with git tag and git push --tags origin
            else: git push origin
            cd ../public_html
            git checkout -f   (live site should have the latest dev from ticket#)
        else:
            revert the merge: git checkout master~1; git commit -m "reverting ticket#"
            update ticket# that testing failed, with the failure details

    Snapshots: each major deployment sprint should have a standard name and should be tracked. Method: git tag. Naming convention: TBD.

    Reverting the site to a previous state: if something goes wrong, revert to the previous snapshot and debug the issue in dev with a new ticket#. Once the bug is fixed, follow the deployment process again.

    My questions: Does this workflow make sense? If not, any recommendations? Is my approach for reverting correct, or is there a better way to say 'revert to before commit x'?
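
    On the last question, two hedged options for 'revert to before commit x' (the SHAs are placeholders):

        # undo a bad merge on testing without rewriting published history
        git revert -m 1 <merge_commit_sha>
        # or, if the branch has not been pushed anywhere shared, simply move it back
        git reset --hard <last_good_sha>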

    Read the article

  • __modver_version_show undefined error when building linux kernel 3.0.4 version

    - by Jie Liu
    I tried to build the Linux kernel 3.0.4 on Ubuntu 11.10 in VirtualBox. Here are my steps:

        1. Download the source code
        2. tar xjvf linux-source-3.0.0.tar.bz2
        3. cd linux-source-3.0.0
        4. make menuconfig (changed nothing, used the default config and saved it to .config)
        5. make

    Actually I think it should be 3.0.4, because in the Makefile I can see VERSION = 3, PATCHLEVEL = 0, SUBLEVEL = 4, EXTRAVERSION =. Then at stage 2, which is building the modules, an error occurred:

        ERROR: "__modver_version_show" [drivers/staging/rts5139/rts5139.ko] undefined!
        make[1]: *** [__modpost] Error 1
        make: *** [modules] Error 2

    Perhaps because 3.0.4 is such a new release, I cannot find the same problem reported anywhere, nor any solution to it.
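
    If the Realtek RTS5139 card-reader driver is not actually needed, one workaround that is often suggested is simply not building that staging module; a sketch, run from the top of the kernel tree:

        # turn the offending staging driver off in .config, then rebuild
        scripts/config --disable RTS5139
        make oldconfig
        make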

    Read the article

  • Nautilus header bar missing -- Ubuntu Gnome 13.10 (Gnome 3.10)

    - by user75252
    So, I recently did a fresh install of Ubuntu GNOME 13.10, added the gnome3-team/gnome3-next and gnome3-team/gnome3-staging PPAs, and upgraded to Gnome 3.10. (Also using a dual-monitor system, 1920 x 1080, Nvidia-319 driver.) Everything was running fine after the updates (including Nautilus, or "Files"), but when I opened Nautilus, at some point, the header bar was gone and it got stuck in full-screen mode. The header is there for every other application, though. I can't resize Nautilus, and I can't move it with the Alt+F7 hotkey. I can, however, make the sidebar disappear with F9 and make the program close with Alt+F4. I can also bring up the window menu with Alt+space, but the options to "resize" and "move" are greyed out, and "Move Titlebar Onscreen" does nothing when clicked. Attempted solutions: I uninstalled, ran apt-get autoremove clean autoclean, and re-installed Nautilus, including any subsequent applications that were removed -- no fix. I installed and tried replacing the titlebar theme with Ambiance via Gnome Tweak Tool to at least restore the header/title bar -- no fix. I created a new user, logged into that, and opened Nautilus. It DID open up in windowed mode with the header bar, but then, without my involvement, went to full-screen without the header bar. Same problem. Running "sudo nautilus" from the terminal does open it (full-screen, without header), but gives this error:

        (nautilus:7531): Gtk-WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files

    A screenshot of the complete Nautilus window accompanies the original question.
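
    One low-risk thing worth trying (a sketch; it quits Nautilus, resets its saved window state and preferences, and restarts it as the normal user rather than with sudo):

        nautilus -q                          # quit all running Nautilus instances
        dconf reset -f /org/gnome/nautilus/  # reset saved window geometry and preferences
        nautilus &                           # start it again as the normal user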

    Read the article

  • Setting up nginx on Ubuntu?

    - by Industrial
    Hi everyone, I've just set up a VPS running Ubuntu Server 10.10 as a test environment to run with nginx. So far I've run apt-get install nginx php5 php5-cgi and accessed the IP of the VPS with a browser, which outputs "It works", so it should be ready to go. Never having worked with nginx before, I have no idea what to do next. How should I configure my nginx install to run properly as a staging server on my LAN? Apparently there are multiple configs for nginx, including sites-default and nginx-default, which is really confusing me.
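
    On Ubuntu the packaged nginx follows the Debian sites-available/sites-enabled convention, so a minimal next step (a sketch; the site name is just an example) is to define a server block, enable it, and reload:

        sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/staging
        sudo nano /etc/nginx/sites-available/staging    # set root, server_name and the fastcgi/PHP settings here
        sudo ln -s /etc/nginx/sites-available/staging /etc/nginx/sites-enabled/staging
        sudo nginx -t && sudo /etc/init.d/nginx reload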

    Read the article

  • Starting/Stopping IBM WebSphere Application Server (WAS) 7 from the Command Line

    - by Christopher Parker
    I've written a script to automate the process of starting, stopping, and restarting WAS7 from the command line. Nothing starts automatically on one of our staging servers, so I have to start everything: deployment manager, node agent, app server, and Web server. The script I wrote seems to work pretty well. A coworker of mine recommended that I structure my commands differently. I'm wondering if there's a good, valid reason for doing so. First, my variables:

        WAS_HOME="/opt/IBM/WebSphere/AppServer"
        WAS_PROFILE_NAME="AppSrv01"
        WAS_APP_SERVER="server1"
        WAS_WEB_SERVER="webserver1"

    How I had the start commands:

        "${WAS_HOME}/bin/startManager.sh"
        "${WAS_HOME}/bin/startNode.sh" -profileName $WAS_PROFILE_NAME
        "${WAS_HOME}/bin/startServer.sh" -profileName $WAS_PROFILE_NAME $WAS_APP_SERVER
        "${WAS_HOME}/bin/startServer.sh" -profileName $WAS_PROFILE_NAME $WAS_WEB_SERVER

    I was told that I should do it like this, instead:

        WAS_DMGR="Dmgr01"   # Added variable
        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startNode.sh"
        "${WAS_HOME}/profiles/${WAS_DMGR}/bin/startManager.sh"
        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startServer.sh" $WAS_APP_SERVER
        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startServer.sh" $WAS_WEB_SERVER

    How is the second way of starting up everything for WebSphere any better or more correct than the first, original, way?

    Read the article

  • Ubuntu 12.04 taking too much time to boot

    - by adarshdinesh
    Ubuntu 12.04 is taking a long time to boot. Here are the kernel messages from boot; they show that anacron was killed. Why, and how do I fix the problem?

        [ 2.241047] scsi6 : usb-storage 2-1.6:1.0
        [ 2.241501] usbcore: registered new interface driver usb-storage
        [ 2.241895] USB Mass Storage support registered.
        [ 3.240670] scsi 6:0:0:0: Direct-Access Multiple Card Reader 1.00 PQ: 0 ANSI: 0
        [ 3.241791] sd 6:0:0:0: Attached scsi generic sg2 type 0
        [ 3.243083] sd 6:0:0:0: [sdb] Attached SCSI removable disk
        [ 12.568641] Adding 4037904k swap on /dev/sda3. Priority:-1 extents:1 across:4037904k
        [ 12.615014] udevd[462]: starting version 175
        [ 12.651334] mei: module is from the staging directory, the quality is unknown, you have been warned.
        [ 12.655283] [drm] Initialized drm 1.1.0 20060810
        ...................
        [ 14.118369] init: alsa-restore main process (982) terminated with status 19
        [ 14.252595] init: anacron main process (1033) killed by TERM signal
        [ 14.285763] HDMI status: Codec=3 Pin=5 Presence_Detect=0 ELD_Valid=0
        [ 14.285841] input: HDA Intel PCH HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8
        [ 14.285925] input: HDA Intel PCH Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input9
        [ 14.285991] input: HDA Intel PCH Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input10
        [ 14.615073] init: plymouth-stop pre-start process (1222) terminated with status 1
        [ 16.447287] wlan0: authenticate with c0:8a:de:7c:60:e8 (try 1)
        [ 16.448858] wlan0: authenticated
        [ 16.453405] wlan0: associate with c0:8a:de:7c:60:e8 (try 1)
        [ 16.456392] wlan0: RX AssocResp from c0:8a:de:7c:60:e8 (capab=0x431 status=0 aid=2)
        [ 16.456398] wlan0: associated
        [ 16.457014] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement)
        [ 16.457017] ieee80211 phy0: brcmsmac: brcms_ops_bss_info_changed: associated
        [ 16.457019] ieee80211 phy0: changing basic rates failed: -22
        [ 16.457021] ieee80211 phy0: brcms_ops_bss_info_changed: arp filtering: enabled true, count 0 (implement)
        [ 16.457226] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
        [ 16.654196] ieee80211 phy0: brcms_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement)
        [ 17.823565] ieee80211 phy0: wl0: brcms_c_d11hdrs_mac80211: txop exceeded phylen 180/256 dur 1946/1504
        [ 18.220865] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement)
        [ 26.881422] wlan0: no IPv6 routers present
        [ 68.228293] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement)
        [ 73.240133] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement)
        [ 76.574490] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement)
        [ 102.180006] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement)
        [ 103.100984] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement)
        [ 124.171624] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement)
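
    A hedged first step for finding out where the boot time actually goes is to profile the boot with bootchart (package names as found in the 12.04 archive) and read the chart it renders after the next reboot:

        sudo apt-get install bootchart pybootchartgui
        sudo reboot
        # after logging back in, look for the rendered chart under /var/log/bootchart/
        ls /var/log/bootchart/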

    Read the article

  • How to keep groups when pulling with git

    - by mimrock
    I have a staging site that is a working directory of a git repository. How do I set up git so that a developer can pull a branch or release without changing the group of the modified files? An example: let's say I have two developers, robin and david. They are both in the git-users group, so initially they both have write permission on site.php:

        -rw-rw-r-- 1 robin git-users 46068 Nov 16 12:12 site.php
        drwxrwxr-x 8 robin git-users 4096 Nov 16 14:11 .git

    After robin-server1$ git pull origin master:

        -rw-rw-r-- 1 robin robin 46068 Nov 16 12:35 site.php
        drwxrwxr-x 8 robin git-users 4096 Nov 16 14:11 .git

    Now david does not have write permission on site.php, because the group changed from 'git-users' to 'robin'. From now on, david will get a permission denied error when he tries to pull into this repository.
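
    A common way to keep the group stable (a sketch, assuming both developers are in git-users and the site lives at /var/www/site) is the setgid bit on the directories plus git's shared-repository setting:

        # make new files inherit the git-users group instead of the pulling user's primary group
        sudo chgrp -R git-users /var/www/site
        sudo find /var/www/site -type d -exec chmod g+s {} +
        # keep the objects and refs inside .git group-writable as well
        cd /var/www/site && git config core.sharedRepository group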

    Read the article

  • meld on OS X 10.7 doesn't work?

    - by klm123
    I'm installing meld on Mac OS 10.7 using MacPorts. It downloaded all the dependencies and reported that everything is OK:

        Staging meld into destroot
        Installing meld @1.5.3_0
        Activating meld @1.5.3_0
        Cleaning meld
        Updating database of binaries: 100.0%
        Scanning binaries for linking errors: 100.0%
        No broken files found.

    But when I run it:

        [18:28:24]~$ meld
        Traceback (most recent call last):
          File "/opt/local/bin/meld", line 75, in <module>
            locale.setlocale(locale.LC_ALL,'')
          File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/locale.py", line 539, in setlocale
            return _setlocale(category, locale)
        locale.Error: unsupported locale setting

    What is the problem and how do I deal with it?
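
    That traceback is Python's locale module choking on an unset or unsupported locale. A quick, hedged check is to export a real locale before launching meld (and add it to your shell profile if it helps):

        export LC_ALL=en_US.UTF-8
        export LANG=en_US.UTF-8
        meld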

    Read the article

  • What is the harm in giving developers read access to application server application event logs?

    - by Jim Anderson
    I am a developer working on an ASP.NET application. The application writes logging messages to the Windows event log - a custom application log just for this application. However, I do not have any access to the testing or staging web/application servers. I thought an admin could just give me read access to this event log to help in debugging problems (currently a service that works in dev is not working in the test environment and I have no idea why), but that is against my client's policy (I'm a consultant). I feel silly repeatedly asking an admin to look at the event log for me. What is the harm in giving developers read access to application event logs on the application servers? Is there a different method of application logging that sysadmins prefer programmers use? Surely admins don't want to be fetching logging messages for developers all the time.

    Read the article

  • How do I structure code and builds for continuous delivery of multiple applications in a small team?

    - by kingdango
    Background: 3-5 developers supporting (and building new) internal applications for a non-software company. We use TFS, although I don't think that matters much for my question. I want to be able to develop a deployment pipeline and adopt continuous integration / deployment techniques. Here's what our source tree looks like right now; we use a single TFS Team Project:

        $/MAIN/src/
        $/MAIN/src/ApplicationA/VSSOlution.sln
        $/MAIN/src/ApplicationA/ApplicationAProject1.csproj
        $/MAIN/src/ApplicationA/ApplicationAProject2.csproj
        $/MAIN/src/ApplicationB/...
        $/MAIN/src/ApplicationC
        $/MAIN/src/SharedInfrastructureA
        $/MAIN/src/SharedInfrastructureB

    My goal (a pretty typical promotion pipeline):

        1. When a code change is made to a given application, I want to be able to build that application and auto-deploy that change to a DEV server. I may also need to build dependencies on Shared Infrastructure components, and I often have some database scripts or changes as well.
        2. If developer testing passes, I want a manually triggered but automated deploy of that build to a STAGING server where end users will review new functionality.
        3. Once it's approved by end users, I want a manually triggered auto-deploy to production.

    Question: How can I best adopt continuous deployment techniques in a multi-application environment? A lot of the advice I see is single-application-specific; how is that best applied to multiple applications? For step 1, do I simply set up a separate Team Build for each application? What's the best approach to accomplishing steps 2 and 3, promoting the latest build to new environments? I've seen this work well with web apps, but what about database changes?

    Read the article

  • lirc_zilog IR transmission no longer working with HD-PVR on 12.04

    - by johnf
    I have been running Ubuntu 10.04 with a patched version of lirc_zilog for two years. I upgraded to 12.04 and lirc_zilog is no longer working with my HD-PVR. The MythTV wiki reports that it did work out of the box with 11.04. The error message I get on irsend is as follows:

        johnf@carbon:~$ /usr/local/bin/irsend SEND_ONCE blaster 0_130_KEY_POWER
        irsend: command failed: SEND_ONCE blaster 0_130_KEY_POWER
        irsend: hardware does not support sending

    The lircd daemon, run interactively, reports the following:

        lircd: accepted new client on /var/run/lirc/lircd
        lircd: could not get hardware features
        lircd: this device driver does not support the LIRC ioctl interface
        lircd: major number of /dev/lirc0 is 250
        lircd: LIRC major number is 61
        lircd: check if /dev/lirc0 is a LIRC device
        lircd: WARNING: Failed to initialize hardware
        lircd: error processing command: SEND_ONCE blaster 0_130_KEY_POWER
        lircd: hardware does not support sending
        lircd: removed client

    Checking dmesg seems to indicate that the kernel module is loading properly:

        [56497.730743] lirc_zilog: module is from the staging directory, the quality is unknown, you have been warned.
        [56497.730999] lirc_zilog: Zilog/Hauppauge IR driver initializing
        [56497.732484] lirc_zilog: ir_probe: ir_rx_z8f0811_hdpvr on i2c-0 (Hauppage HD PVR I2C), client addr=0x71
        [56497.732493] lirc_zilog: ir_probe: ir_tx_z8f0811_hdpvr on i2c-0 (Hauppage HD PVR I2C), client addr=0x70
        [56497.732496] lirc_zilog: probing IR Tx on Hauppage HD PVR I2C (i2c-0)
        [56497.756822] lirc_zilog: firmware of size 302355 loaded
        [56497.756945] lirc_zilog: 743 IR blaster codesets loaded
        [56497.757030] i2c i2c-0: lirc_dev: driver lirc_zilog registered at minor = 0
        [56497.757033] lirc_zilog: IR unit on Hauppage HD PVR I2C (i2c-0) registered as lirc0 and ready
        [56497.757035] lirc_zilog: probe of IR Tx on Hauppage HD PVR I2C (i2c-0) done
        [56497.757056] lirc_zilog: initialization complete

    Here is my /etc/lirc/hardware.conf:

        #Chosen IR Transmitter
        TRANSMITTER="HD-PVR"
        TRANSMITTER_MODULES="lirc_dev lirc_zilog"
        TRANSMITTER_DRIVER=""
        TRANSMITTER_DEVICE="/dev/lirc0"
        TRANSMITTER_SOCKET=""
        TRANSMITTER_LIRCD_CONF=""
        TRANSMITTER_LIRCD_ARGS=""

    My lircd.conf is a copy of the recommended one. Examination of the kernel source seems to indicate that the lirc_zilog module should support transmission; it's newer than the patched version I was manually compiling on 10.04. I was previously using a manually built version of lirc 0.8.7 and not the packaged one; I'm now running the packaged version 9.0. I can provide any additional information required and will perform tests quickly. I'm very eager to get this issue resolved.

    Read the article

  • What can you do to decrease the number of live issues with applications?

    - by User Smith
    First off, I have seen this post, which is slightly similar to my question: What can you do to decrease the number of deployment bugs of a live website? Let me lay out the situation for you. The team of programmers that I belong to has metrics associated with our code. Over the last several months the errors in our live system have increased by a large amount. We require that our updates to applications be tested by at least one other programmer prior to going live. I personally am completely against this, as I think applications should be tested by end users: end users are much better testers than programmers. I am not against programmers testing (obviously programmers need to test code), but most of the time they are too close to the code. The reason I think end users should test in our scenario is that we don't have business analysts; we just have programmers. I come from a background where BAs took care of all the testing once programmers checked off that it was ready to go live. We do have a staging environment in place that is a clone of the live environment, which we use to ensure that we don't have issues between the development and live environments; this does catch some bugs. We don't really do end-user testing at all. I should say we don't really have anyone testing our code except programmers, which I think gets us into this mess (ideally, we would have BAs, QA, or professional testers test). We don't have a QA team or anything of that nature, and we don't have fully laid out test cases for our projects. OK, I am just a peon programmer at the bottom of the rung, but I am probably more tired of these issues than the managers complaining about them, so I don't have the ability to tell them they are doing it all wrong. I have tried gentle pushes in the correct direction. Any advice or suggestions on how to alleviate this issue are greatly appreciated. Thanks.

    Read the article

  • Can you run a specific tomcat Web Application under another user?

    - by Boaz
    Hi, we're developing a web-app running under Tomcat which relies on Java user preferences to store all kinds of settings. That works great, but we've run into a problem: we needed to set up another, staging, web-app which allows you to test settings before setting them live. The core of the problem is that the Java user preferences are the same for all web-apps, because all of them run under the tomcat user (configurable). For legacy reasons I cannot at the moment change my preferences structure, so I'm hoping for a solution on the Tomcat configuration side. Is it possible to designate different user credentials for a specific web-app in Tomcat? Thanks, Boaz
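
    Within one Tomcat JVM every deployed web-app runs as the same OS user, so the usual workaround is a second Tomcat instance (its own CATALINA_BASE) owned by a different account. A rough sketch with made-up paths and user names:

        sudo useradd -r tomcat-staging
        sudo mkdir -p /opt/tomcat-staging/logs /opt/tomcat-staging/temp /opt/tomcat-staging/webapps /opt/tomcat-staging/work
        sudo cp -r /opt/tomcat/conf /opt/tomcat-staging/conf
        # edit /opt/tomcat-staging/conf/server.xml so its ports do not clash with the main instance
        sudo chown -R tomcat-staging /opt/tomcat-staging
        sudo -u tomcat-staging env CATALINA_BASE=/opt/tomcat-staging /opt/tomcat/bin/startup.sh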

    Read the article

  • FTP Synchronization software for Mac or PC

    - by evanmcd
    Hi, I've been using FTP Synchronizer for a while and have generally had pretty good results with it. But I've just moved to a Mac full-time (at work as well as at home now), so I want to get a native client if I can. I've tried the only one I've found - SuperFlexibleSynchronizer - but it crashed every time I started an FTP-to-FTP sync attempt. The most important features to me are: 1) the ability to sync a large number of files (thousands), as I generally work on sites with large numbers of files; 2) FTP-to-FTP sync. The latter would be very helpful because I work with some CMS-based sites where users upload files to staging, and I don't want to have to move files locally first before moving them live. Thanks! Evan

    Read the article

  • Multiple SSL domains on the same IP address and same port?

    - by johnlai2004
    I set up an Ubuntu 9.10 / Apache2 / PHP5 server. I was under the impression that each valid SSL certificate (no domain wildcards) required its own unique IP address and port number combination. But the answer to a previous question I posted is at odds with this claim: http://serverfault.com/questions/109766/ssl-site-not-using-the-correct-ip-in-apache-and-ubuntu Using the accepted answer, I was able to get multiple domains, each with its own valid SSL certificate, to work on the same IP address and on port 443. I am very confused as to why the above answer works, especially after hearing from others that each SSL website on the same server requires its own IP+port combination. I am suspicious that I did something wrong. Can someone clear up the confusion? Websites currently using different SSL certificates but the same IP and port are: https://www.yummyskin.com/ https://staging.bossystem.org/
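
    The likely explanation is Server Name Indication (SNI): with a reasonably recent Apache and OpenSSL the client sends the hostname inside the TLS handshake, so one IP:443 can serve several certificates. A quick, hedged way to check which certificate each name actually receives (the IP below is a placeholder for the server's real address):

        echo | openssl s_client -connect 203.0.113.10:443 -servername www.yummyskin.com 2>/dev/null | openssl x509 -noout -subject
        echo | openssl s_client -connect 203.0.113.10:443 -servername staging.bossystem.org 2>/dev/null | openssl x509 -noout -subject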

    Read the article

  • ODI 11g - Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends, and then wondered: why don't they know this? Such as this article here – in the past customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that improves the out-of-the-box experience (just build the mapping and the appropriate KM is used) and improves out-of-the-box performance for file-to-file data movement. This improved out-of-the-box handling for file-to-file data integration cases (from the 11.1.1.5.2 companion CD onwards) dramatically speeds up file integration. In the past I had seen some consultants write Perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe. It uses Java threading to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems. So in my design below, which transforms a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2 to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3 GB with 2 threads in 140 seconds (with a single thread it took 220 seconds) - by no means was this on any super computer, by the way. The great thing here is that it worked well out of the box, from design to execution, without any funky configuration, and, a big plus, it was much faster than before. So if you are doing any file-to-file transformations, check it out!

    Read the article
