Search Results

Search found 11703 results on 469 pages for 'dev to production'.

  • Is there a way to export reports from Microsoft CRM4?

    - by Jake
    I'm setting up a proper dev environment for my client (dev/qa/stage/prod). I'd like to find a way to export reports so we can cleanly move from one environment to the next. Custom RDL-type reports are easy (just import the RDL in the next environment), but the reports built inside CRM don't appear to export anywhere. Am I missing something, or is this another feature that was missed? Appreciate the help.

  • django fcgi - call a management command with subprocess.Popen

    - by user41855
    Hi, I'm using an app called django-chronograph. It has a line of code that works in my dev environment but not in production:

        p = subprocess.Popen(['python', get_manage_py(), 'run_job', str(self.pk)])

    In production this line crashes with:

        unknown command run_job

    whereas when I run manage.py run_job directly from the command line, it works fine. Interestingly, it worked once when we replaced 'python' with '/usr/bin/python', but after we restarted the server once more it was back to the old behaviour. It therefore looks like a Python path issue. I'm not the one running the server; it's my app that should run, and it would be great to get some help here. Fair warning: I'm a total noob regarding server administration.

    Server environment: nginx with an FCGI daemon, FCGI in prefork mode.
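
    A commonly reported fix for this class of problem is to launch the child process with the absolute path of the current interpreter (Python's sys.executable) rather than the bare name 'python', so the daemon's PATH no longer matters. As a minimal shell check, assuming the FCGI daemon runs as www-data (the user name is an assumption), compare what the daemon's environment resolves with what your login shell resolves:

        # Which python does the daemon's environment see, vs. yours?
        sudo -u www-data sh -c 'command -v python'
        command -v python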

  • ASP.NET: The underlying connection was closed: Could not establish trust relationship

    - by David Lively
    When attempting to use HttpWebRequest to retrieve a page from my dev server, I get a WebException: "The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. The remote certificate is invalid according to the validation procedure..." The URL I'm attempting to read is a plain old http://myserver.com/mypage.asp, with no SSL involved. The production server has a valid certificate, so this shouldn't be an issue there, but our dev server doesn't. Help!
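
    A hedged first diagnostic: if a plain http:// request ends in an SSL trust error, something along the way is upgrading the request to https, often a redirect on the dev server. Following the redirect chain and watching the Location headers should confirm it (the URL is the one from the question):

        # -s = quiet, -I = HEAD only, -L = follow redirects
        curl -sIL http://myserver.com/mypage.asp | grep -i -E "HTTP/|Location:"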

  • Stopping a recursive grep immediately after the first match

    - by yael
    Hi friends. In my script I use the following command to find a param under /var:

        grep -R "param" /var/* 2>/dev/null | grep -wq "param"

    My problem is that after grep finds the param in a file, it continues searching until everything under /var/* has been scanned. For example, when I run the command above, grep finds the param after about one second, but then keeps looking for further occurrences of the same param and takes almost 30 seconds in total. How can I make grep stop immediately after the first match of the param word? THX
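
    One hedged approach that usually works with GNU grep: print only matching file names and let head terminate the pipeline. Once head exits after the first line, grep receives SIGPIPE on its next write and stops scanning the rest of /var:

        # -l prints each matching file name once; head -n 1 kills the
        # pipeline after the first hit, ending the recursive search early.
        grep -R -l -w "param" /var 2>/dev/null | head -n 1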

  • 4-7 second delay accessing MySQL across the network

    - by Kristiaan
    Hello, our company recently purchased a new server with the intention of replacing our aging database server. It's a full 64-bit Windows Server 2008 Enterprise system. I did the basic server setup and configuration, installed the 64-bit version of MySQL, and configured it to match the existing server as closely as possible. However, when it was swapped in for the production database server, our software showed an increased delay accessing the MySQL database, anywhere between 4 and 7 seconds. I have tried disabling TOE, IPv6 and a few other suggested solutions, but so far cannot find where this slowdown is coming from. Swapping the production server back in makes the delay go away. In terms of software and hardware the servers are not identical at all: the old one is a 32-bit Windows 2003 Standard machine, the new one a 64-bit Windows 2008 Enterprise machine. Thanks, Kris

  • Is there a way to create a consistent snapshot/SnapMirror across multiple volumes?

    - by Tomer Gabel
    We use a NetApp FAS 6-series filer with an application that spans multiple volumes. For backup purposes I would like to create a consistent snapshot that covers these volumes at the same point in time (or at least with an extremely low delta); additionally, we'd like to use SnapMirror to replicate the production environment to test volumes. The problem is creating a consistent snapshot/SnapMirror, since these commands are not transactional and do not take multiple parameters. I tried scripting consecutive "snap create" or "snapmirror resync" commands via SSH, but there's always a 0.5-2 second difference between snapshots. It's currently "good enough", but I'm seriously concerned about the consistency impact under increased load (we're currently in pre-production). Has anyone managed to create a consistent snapshot that spans several volumes? How did you pull it off?
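
    One hedged way to shrink the skew of a scripted approach is to issue all the snapshot commands in parallel rather than serially, so the per-command SSH latency no longer accumulates. A minimal sketch, assuming 7-Mode "snap create" syntax (the filer and volume names are placeholders):

        # Fire all snapshots concurrently, then wait for them to finish.
        for vol in vol1 vol2 vol3; do
            ssh filer1 "snap create $vol nightly.0" &
        done
        wait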

  • jcuda library usage problem

    - by user513164
    Hi, I'm very new to Java and Linux. I have code taken from the jcuda examples; it follows:

        import jcuda.CUDA;
        import jcuda.driver.CUdevprop;
        import jcuda.driver.types.CUdevice;

        public class EnumDevices {
            public static void main(String args[]) {
                // Init CUDA Driver
                CUDA cuda = new CUDA(true);
                int count = cuda.getDeviceCount();
                System.out.println("Total number of devices: " + count);
                for (int i = 0; i < count; i++) {
                    CUdevice dev = cuda.getDevice(i);
                    String name = cuda.getDeviceName(dev);
                    System.out.println("Name: " + name);
                    int version[] = cuda.getDeviceComputeCapability(dev);
                    System.out.println("Version: " + String.format("%d.%d", version[0], version[1]));
                    CUdevprop prop = cuda.getDeviceProperties(dev);
                    System.out.println("Clock rate: " + prop.clockRate + " MHz");
                    System.out.println("Threads per block: " + prop.maxThreadsPerBlock);
                }
            }
        }

    I'm using Ubuntu as my operating system. I compiled it with:

        javac -cp /home/manish.yadav/Desktop/JCuda-All-0.3.2-bin-linux-x86_64 EnumDevices

    and got the following error:

        error: Class names, 'EnumDevices', are only accepted if annotation processing is explicitly requested
        1 error

    I don't know what this error means or what I should do to compile the program. So I changed the command to include the file extension:

        javac -cp /home/manish.yadav/Desktop/JCuda-All-0.3.2-bin-linux-x86_64 EnumDevices.java

    and got:

        EnumDevices.java:36: clockRate is not public in jcuda.driver.CUdevprop; cannot be accessed from outside package
                System.out.println("Clock rate: " + prop.clockRate + " MHz");
                                                        ^
        EnumDevices.java:37: maxThreadsPerBlock is not public in jcuda.driver.CUdevprop; cannot be accessed from outside package
                System.out.println("Threads per block: " + prop.maxThreadsPerBlock);
                                                               ^
        2 errors

    Now I'm completely confused and don't know what to do. How do I compile this program? How do I install the jcuda package, or how do I use it? How do I use a package that ships only jar files and .so files, where the jar files don't have a manifest file? Please help me!
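
    A hedged sketch of how jar-only libraries are typically compiled and run: put the jar(s) on the classpath (pointing -cp at a bare directory only picks up .class files, not jars) and tell the JVM where the native .so files live. The jar name and layout below are assumptions; list the extracted JCuda directory to find the real ones. The "not public" errors may also mean the example code was written against a different JCuda version than the jars on disk.

        JCUDA=/home/manish.yadav/Desktop/JCuda-All-0.3.2-bin-linux-x86_64
        # Compile against the library jar (jar name is an assumption):
        javac -cp "$JCUDA/jcuda.jar" EnumDevices.java
        # Run with the jar plus the current directory on the classpath,
        # and the directory holding the .so files on java.library.path:
        java -cp "$JCUDA/jcuda.jar:." -Djava.library.path="$JCUDA" EnumDevices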

  • Running resize2fs on /

    - by Paul Steckler
    I'm trying to resize an ext4 filesystem on a Fedora 11 box. Using fdisk and LVM, I was able to grow the partition and the logical volume containing the filesystem. When I try to run resize2fs on the device containing the filesystem (/dev/sda2 in this case), I get:

        Device or resource busy while trying to open /dev/sda2
        Couldn't find valid filesystem superblock

    I've tried this from a rescue disk that doesn't have the filesystem mounted; no joy. Maybe resize2fs doesn't know about ext4?
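
    One hedged observation: if the partition was grown via LVM, then /dev/sda2 is an LVM physical volume rather than a filesystem, which would explain the "couldn't find valid filesystem superblock" message; resize2fs wants the logical volume's device node instead. A sketch from a rescue environment (the VG/LV names are placeholders):

        vgchange -ay                          # activate volume groups
        e2fsck -f /dev/VolGroup00/LogVol00    # required before an offline resize
        resize2fs /dev/VolGroup00/LogVol00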

  • Solution to time shifting requirement in Active Directory

    - by MikeR
    Hi, I currently have an Active Directory with several child domains (each consisting of nothing other than a DC and bespoke application servers) set up for testing our CRM software. Because some of it is date/time sensitive, these domains were at some point set to dates in the future, which is causing replication errors. I'm working on getting rid of these child domains, but our testers still have a requirement to be able to time-shift. Does anyone know of a solution that would allow our test environments to have their clocks changed (always forward) without affecting the production Active Directory? Is it as simple as creating a separate forest on the same LAN, or would that interfere with my production forest? Thanks for any advice.

  • How do you apply development practices like version control, testing and continuous integration/deployment to system administration?

    - by arex1337
    Imagine you're managing a number of servers running a number of different services that are used by a number of people. Now say you want to reconfigure or replace some software on one of those servers. Obviously you don't want to work on servers that are in production. If this were a code change, as a developer, I would make the change on my local development machine, test it locally and commit the change to a version control system. The changes could then be deployed in a staging environment, tested further and finally deployed in a production environment. It would also be easy for me to roll back, if necessary. Generally, or specifically, how do you achieve this in system administration? (The first thing that comes to mind is to use virtual machines and put virtual machine images in version control, but I'm sure there is a lot of literature and clever solutions I'm not presently aware of.)
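
    There is indeed a lot of literature on this; it's the core idea behind "infrastructure as code" and configuration-management tools such as Puppet, Chef and Cfengine. As one small, concrete sketch of the idea, /etc can be kept under version control with etckeeper on a Debian-style system (commands sketched from memory; check the tool's documentation):

        apt-get install etckeeper
        etckeeper init                                        # puts /etc in a VCS repo
        etckeeper commit "baseline before reconfiguring service X"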

  • nginx repeatedly requesting authentication in IE

    - by James MacLeod
    I am having a few issues with an nginx server. One site keeps requesting authentication when accessed from IE, but in Firefox and Safari the site is fine, with no authentication prompt. Reading around the web I see that gzip could be causing errors, but the other sites on this server work without issue. Here is the config:

        user sysadmin sysadmin;
        worker_processes 8;
        error_log logs/error.log debug;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/lib/ruby/gems/1.8/gems/passenger-2.2.9;
            passenger_ruby /usr/bin/ruby1.8;
            include mime.types;
            default_type application/octet-stream;
            client_header_timeout 3m;
            client_body_timeout 3m;
            client_max_body_size 5m;
            send_timeout 3m;
            client_header_buffer_size 1k;
            large_client_header_buffers 4 4k;
            gzip on;
            gzip_min_length 1100;
            gzip_buffers 4 8k;
            gzip_types text/plain;
            output_buffers 1 32k;
            postpone_output 1460;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 75 20;

            server {
                listen 80;
                server_name .reg-power.com .reg-power.co.uk .reg-power.eu .reg-power.eu.com .reg-power.net .reg-power.org .reg-power.org.uk .reg-power.uk.com .regegen.eu .regpower.co.uk .regpower.eu .regpower.eu.com .regpower.net .regpower.org .regpower.org.uk .regpower.uk.com .renegen.com .renegen.eu .renewableenergygeneration.co.uk .renewableenergygeneration.com reg.rails1.flowhost.co.uk;
                root /home/sysadmin/reg/current/public;
                passenger_enabled on;
                rails_env production;
                index index.html;
            }

            server {
                listen 80;
                server_name media.reg-power.com;
                root /home/sysadmin/admin/current/public;
                index index.html;
            }

            server {
                listen 80;
                server_name admin.reg-power.com admin.rails1.flowhost.co.uk;
                root /home/sysadmin/admin/current/public;
                passenger_enabled on;
                rails_env production;
                index index.html;
            }

            server {
                listen 80;
                server_name .livingfuels.co.uk livingfuels.rails1.flowhost.co.uk;
                root /home/sysadmin/livingfuels/current/public;
                passenger_enabled on;
                rails_env production;
                index index.html;
            }

            server {
                listen 80;
                server_name .regbiopower.com .regbiopower.co.uk regbiopower.rails1.flowhost.co.uk;
                root /home/sysadmin/regbiopower/current/public;
                passenger_enabled on;
                rails_env production;
                index index.html;
            }

            server {
                listen 80;
                server_name .clpwindprojects.co.uk clp.rails1.flowhost.co.uk;
                access_log /home/sysadmin/clp/logs/access.log;

                location / {
                    root /home/sysadmin/clp;
                    index index.php;
                    if (-f $request_filename) {
                        expires 30d;
                        break;
                    }
                    if (!-e $request_filename) {
                        rewrite ^(.+)$ /index.php?q=$1 last;
                    }
                }

                location ~ \.php$ {
                    fastcgi_pass 127.0.0.1:49232; # this must point to the socket spawn_fcgi is running on.
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME /home/sysadmin/clp$fastcgi_script_name; # same path as above
                    fastcgi_param QUERY_STRING $query_string;
                    fastcgi_param REQUEST_METHOD $request_method;
                    fastcgi_param CONTENT_TYPE $content_type;
                    fastcgi_param CONTENT_LENGTH $content_length;
                    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
                    fastcgi_param REQUEST_URI $request_uri;
                    fastcgi_param DOCUMENT_URI $document_uri;
                    fastcgi_param DOCUMENT_ROOT /home/sysadmin/clp;
                    fastcgi_param SERVER_PROTOCOL $server_protocol;
                    fastcgi_param GATEWAY_INTERFACE CGI/1.1;
                    fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
                    fastcgi_param REMOTE_ADDR $remote_addr;
                    fastcgi_param REMOTE_PORT $remote_port;
                    fastcgi_param SERVER_ADDR $server_addr;
                    fastcgi_param SERVER_PORT $server_port;
                    fastcgi_param SERVER_NAME $server_name;
                }
            }
        }

    As you can see, there is no reference to HTTP authentication anywhere in the config.

  • Installing Python egg dependencies without apt-get

    - by l0b0
    I've got a Python module which is distributed on PyPI, and is therefore installable using easy_install. It depends on lxml, which in turn depends on libxslt1-dev. I'm unable to install libxslt1-dev with easy_install, so putting it in install_requires doesn't work. Is there any way I can get setuptools to install it, instead of resorting to apt-get?
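
    Not with setuptools itself, since libxslt1-dev is a Debian package rather than a Python one. However, lxml can compile its C dependencies statically, which removes the need for the -dev packages entirely; STATIC_DEPS is a documented lxml build switch (exact behaviour varies by lxml version, so treat this as a sketch):

        STATIC_DEPS=true easy_install lxml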

  • Varnish and Plesk

    - by Raphaël
    I have an e-commerce site with 300,000 products and 20,000 categories. It is slow and currently in production. I decided to install Varnish to speed it up. The trouble is that during installation I got a Guru Meditation error. Since the site is in production, I couldn't leave the error up for more than a second, so I backed out, fearing I had made an enormous mistake. I followed this tutorial: http://www.euperia.com/linux/setting-up-varnish-with-apache-tutorial and I'm sure I followed it all without error. I suspect there may be some Plesk-specific configuration needed. Has anyone already installed Varnish on an Ubuntu 10.04 server with Plesk 10? Does anyone have a better resource? I know a Guru Meditation is "very vague" as errors go, but maybe some of you have had this problem. Sincerely,
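
    A hedged first step, since a Guru Meditation usually means Varnish failed to reach or fetch from its backend: reproduce the error while watching Varnish's shared-memory log and look at the backend/fetch records (tag names vary between Varnish versions):

        varnishlog | grep -i -E "backend|fetch|error"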

  • Dell Power Edge R515 - Replacing a Bad Hard Drive in a RAID

    - by LonnieBest
    I've ordered a new hard drive to replace a bad one in a Dell PowerEdge R515. The manual covers the obvious topics regarding physically replacing hard drives, but I've never done this before on a production server where RAID is involved. I've heard that some servers have RAID controllers smart enough to let you hot-swap in the new drive, after which the server automatically rebuilds it to take the old drive's place in the array. Where do I find the proper procedure for replacing a failed hard drive on a live production Dell PowerEdge R515? Can someone with experience tell me how easy or hard this usually is?

  • Subversion commit review software?

    - by Long Cheng
    Is there any existing software that can help enforce a code review process like the one below: Dev users commit their changesets with proper comments, but the changesets do not go into the Subversion repository directly; they are held pending in a "review software". Reviewers can see all pending changesets in the "review software", review each changeset, and decide whether to allow the change into the code trunk. The dev user then receives a notification that the changeset was either accepted and merged into the trunk, or rejected.

  • On Solaris, how do you mount a second zfs system disk for diagnostics?

    - by Matt Ball
    I've got two hard disks in my computer. I've installed Solaris 10u8 on the first and OpenSolaris 2010.3 (dev onnv_134) on the second. Both systems use ZFS and were independently created with a zpool name of 'rpool'. While running Solaris 10u8 on the first disk, how do I mount the second ZFS hard disk (at /dev/dsk/c1d1s0) on an arbitrary mount point (like /a) for diagnostics?
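
    A hedged sketch: because both pools are named rpool, the second one has to be imported by its numeric id and given a new name, with an altroot so its mountpoints land under /a. One caveat: Solaris 10u8 may refuse to import a pool created by a much newer OpenSolaris build if the on-disk zpool version is higher than it understands.

        zpool import                                # lists importable pools with numeric ids
        zpool import -R /a 1234567890123 rpool2     # id and new name are placeholders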

  • Oracle Logical Standby redo generation

    - by DCookie
    Oracle 10.2.0.4 database with a logical standby, on Win2K3. Recently a rather large delete operation was carried out on the production instance. I'm experiencing difficulty with the logical standby: it gets a couple of hundred archive logs (58 MB each) into the operation, and then the apply process fails with an out-of-memory error. Unfortunately, every time it fails it has to restart the apply from the beginning of the transaction, which takes a couple of days each time. While trying to resolve this problem, I've noticed that each archive log from the production system generates 5 or 6 log switches on the standby. I don't understand why this should be. Anyone have any ideas? A related question I've not found the answer to: does anyone know whether the logical standby must run in archivelog mode? I really don't have a need to keep the logs.

  • kloxo setup error

    - by ron
    Hi, I've just purchased a VPS from 2host; it's unmanaged. Support told me to install Kloxo. However, I got the following errors in step 1:

        root@vpshostingtips:~# wget http:// download.lxlabs . com/download/kloxo/production/kloxo-install-master.sh
        --2010-05-06 04:17:04-- http:// download.lxlabs . com/download/kloxo/production/kloxo-install-master.sh
        Resolving download.lxlabs.com... failed: Temporary failure in name resolution.
        wget: unable to resolve host address `download.lxlabs.com'
        root@vpshostingtips:~#

    Note: I split the hyperlinks intentionally to post here. Can somebody tell me the reason for this error? Sorry, I'm so new to VPS hosting.
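
    The failure is DNS resolution on the VPS ("Temporary failure in name resolution"), not anything Kloxo-specific. A hedged quick check and fix (the nameserver address below is just an example public resolver):

        cat /etc/resolv.conf                              # any nameserver lines at all?
        echo "nameserver 8.8.8.8" >> /etc/resolv.conf     # add a resolver if missing
        ping -c 1 download.lxlabs.com                     # verify resolution now works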

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking of ext4 performance on CompactFlash media. I created an ext4 filesystem with a block size of 65536, but I cannot mount it on ubuntu-10.10-netbook-i386 (which already mounts ext4 filesystems with a 4096-byte block size just fine). From what I've read about ext4, it should allow such a big block size. I'd like to hear your comments.

        root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3
        Warning: blocksize 65536 not usable on most systems.
        mke2fs 1.41.12 (17-May-2010)
        mkfs.ext4: 65536-byte blocks too big for system (max 4096)
        Proceed anyway? (y,n) y
        Warning: 65536-byte blocks too big for system (max 4096), forced to continue
        Filesystem label=
        OS type: Linux
        Block size=65536 (log=6)
        Fragment size=65536 (log=6)
        Stride=0 blocks, Stripe width=0 blocks
        19968 inodes, 19830 blocks
        991 blocks (5.00%) reserved for the super user
        First data block=0
        1 block group
        65528 blocks per group, 65528 fragments per group
        19968 inodes per group

        Writing inode tables: done
        Creating journal (1024 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 37 mounts or
        180 days, whichever comes first.  Use tune2fs -c or -i to override.

        root@ubuntu:~# tune2fs -l /dev/sda3
        tune2fs 1.41.12 (17-May-2010)
        Filesystem volume name:   <none>
        Last mounted on:          <not available>
        Filesystem UUID:          4cf3f507-e7b4-463c-be11-5b408097099b
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              19968
        Block count:              19830
        Reserved block count:     991
        Free blocks:              18720
        Free inodes:              19957
        First block:              0
        Block size:               65536
        Fragment size:            65536
        Blocks per group:         65528
        Fragments per group:      65528
        Inodes per group:         19968
        Inode blocks per group:   78
        Flex block group size:    16
        Filesystem created:       Sat Feb  5 14:39:55 2011
        Last mount time:          n/a
        Last write time:          Sat Feb  5 14:40:02 2011
        Mount count:              0
        Maximum mount count:      37
        Last checked:             Sat Feb  5 14:39:55 2011
        Check interval:           15552000 (6 months)
        Next check after:         Thu Aug  4 14:39:55 2011
        Lifetime writes:          70 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      afb5b570-9d47-4786-bad2-4aacb3b73516
        Journal backup:           inode blocks

        root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/
        mount: wrong fs type, bad option, bad superblock on /dev/sda3,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so
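
    The warnings from mkfs.ext4 are the answer in disguise: on Linux, a filesystem's block size cannot exceed the kernel's memory page size, and on i386/x86_64 that page size is 4 KiB, so a 64 KiB-block ext4 filesystem can be created but never mounted on that hardware. A quick check:

        getconf PAGE_SIZE    # prints 4096 on typical x86 systems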

  • KVM to Xen migration

    - by qweet
    I've recently been appointed to create some VMs for production use, and went gung-ho into making a KVM-based VM instead of finding out what our production servers use. I've only recently found out that our own servers run XenSource, and it doesn't look like they're going to be upgraded in the near future. So for the moment I'm stuck with two choices: attempting to convert the KVM VM into a Xen VM, or rebuilding what I have as a new Xen VM. Being the lazy person I am, I would rather not have to rebuild the VM. I've looked for documentation on a procedure to do this, but the only thing I can come up with is an ancient article with some vague instructions. So this is my question, Server Fault: can one migrate a VM running on a KVM kernel to a Xen kernel? And if so, how?
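
    A hedged sketch of the usual first step: KVM and Xen can both use raw disk images, so converting the image format is often most of the job for a fully virtualized (HVM) guest. Paravirtualized Xen guests additionally need a Xen-aware kernel inside the image, which is where most of the vague-instructions pain lives. File names below are placeholders:

        # Convert a qcow2 KVM image to a raw image Xen can attach:
        qemu-img convert -O raw myvm.qcow2 myvm.img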

  • Can ping localhost but can't browse

    - by Anna
    I know this is a pretty common question, but I did my research and couldn't find a solution to this issue. I'm configuring a development application server, and I've reached the point where I can ping both localhost and 127.0.0.1, but I cannot browse either of them from IE or Firefox. I can browse and ping other websites (such as Google) just fine. I tried flushing the DNS (ipconfig /flushdns), restarting the IIS Admin service, restarting IIS itself, etc., and nothing seems to work. The results from ipconfig /all show IP Routing Enabled = No and WINS Proxy Enabled = No. What is intriguing to me is that I compared everything in IIS in the dev environment with the production environment and the settings are the same, yet I can browse localhost in production but not in dev! What could be causing the inability to browse localhost and 127.0.0.1 from IE and Firefox?

  • Is Rsync like subversion, but for a server?

    - by johnlai2004
    I'm trying to learn how to use rsync. I want to create daily backups of my production server. Right now I run the command:

        rsync -azr /var/www/* [email protected]:/var/www

    Now let's say one day I want to roll back the /var/www/ directory on my production server to last month's version. How do I tell rsync to retrieve version N? Having read that rsync only copies differences between src and dest, I assumed rsync works like Subversion, where you commit changes to a destination and keep track of every version, with the option to check out any version at any time. Is that the way rsync works? Is it like Subversion, but for an entire server? That would be great, because then I wouldn't have to do full ssh copies for my nightly backups.
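
    It isn't: rsync synchronizes state but keeps no history, so by itself there is no "version N" to retrieve. A common hedged pattern for getting Subversion-like daily versions cheaply is hard-linked snapshots via --link-dest, where unchanged files cost no extra disk space (the host and paths below are placeholders):

        TODAY=$(date +%F)
        rsync -az --delete \
            --link-dest=/backups/www/latest \
            /var/www/ user@backupserver:/backups/www/$TODAY/
        # Point "latest" at the newest snapshot for tomorrow's run:
        ssh user@backupserver "ln -sfn /backups/www/$TODAY /backups/www/latest"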

  • SQL Server 2008: Table Insert and Range Check?

    - by LB .
    I'm using the table value constructor to insert a bunch of rows at a time. However, since I'm using SQL replication, I run into a range check constraint on the publisher on my automatically managed id column. The reason seems to be that the identity range is not expanded during a multi-row insert, so the max id can be reached before the actual range expansion (or the id threshold) kicks in. It looks like a known problem, for which the solution is either running the merge agent or running the sp_adjustpublisheridentityrange stored procedure. I'm literally doing something like:

        INSERT INTO dbo.MyProducts (Name, ListPrice)
        VALUES ('Helmet', 25.50),
               ('Wheel', 30.00),
               ((SELECT Name FROM Production.Product WHERE ProductID = 720),
                (SELECT ListPrice FROM Production.Product WHERE ProductID = 720));
        GO

    What are my options if I don't want to, or can't, adopt either of the proposed solutions? Expand the range? Decrease the threshold? Can I programmatically modify my request to circumvent this problem? Thanks.
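
    For reference, a hedged sketch of the stored-procedure route named in the question, invoked from the command line; the parameter name is an assumption from memory, so check the SQL Server replication docs for the exact signature on your version:

        sqlcmd -S myserver -d MyDatabase \
            -Q "EXEC sys.sp_adjustpublisheridentityrange @table_name = N'MyProducts'"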
