Search Results

Search found 23220 results on 929 pages for 'default constraint'.


  • Different keyboard layouts at same time for different devices

    - by Joao Carlos
    I have a MacBook Pro running Snow Leopard, and I am using it both as a portable and as a desktop computer. To achieve this, I bought a VGA adapter and I am using it with an external mouse, keyboard, and monitor. The problem is that the external keyboard layout is different from the one on the MacBook. It is a US QWERTY layout (it's a Logitech G15), and because of that, most keys behave differently than they should. The question is: how can I set up different layouts at the same time for different devices? I want US for the external keyboard and PT for the built-in one.


  • phpmyadmin port change?

    - by Rajat
    How do I change my default phpMyAdmin port to 443 or 9999? Is it possible, or do I have to use port 80 only? If it is possible, how do I do it? Apache is listening on port 9999 for sure. However, going to the URL http://<webserver>:9999/phpmyadmin/ gives the following error (in Firefox): An error occurred during a connection to webserver:9999. SSL received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long) Does anyone have a clue what is going on?
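    A hedged note: ssl_error_rx_record_too_long almost always means an https:// URL was pointed at a port that is answering in plain HTTP. If port 9999 only carries a normal (non-SSL) virtual host, browsing to http://<webserver>:9999/phpmyadmin/ should work as-is; serving HTTPS on 9999 or 443 would additionally need mod_ssl and an SSL-enabled virtual host on that port. A quick check from a shell (a sketch; <webserver> is the placeholder from the question):

        # if this prints an ordinary HTTP response, the port speaks plain HTTP
        # and must be browsed with http://, not https://
        curl -v http://<webserver>:9999/phpmyadmin/ 2>&1 | head -20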


  • Can not boot CentOS VM using VirtIO in KVM

    - by Jake
    I converted a qcow2 image to raw and changed the I/O bus to VirtIO for a VM. Now I can't boot that VM. I installed the VirtIO driver with the following command: mkinitrd --with virtio_pci --with virtio_blk -f /boot/initrd-$(uname -r).img $(uname -r) and these are the related kernel modules: virtio_balloon 11329 0 virtio_blk 11593 3 virtio_pci 11845 0 virtio_ring 8513 1 virtio_pci virtio 9541 3 virtio_balloon,virtio_blk,virtio_pci and this is what happens during boot-up. I also changed /boot/grub/device.map from "(hd0) /dev/sda" to "(hd0) /dev/vda", but the problem still exists. Any ideas how to fix this? This is my default boot entry: title CentOS (2.6.18-308.13.1.el5) root (hd0,0) kernel /vmlinuz-2.6.18-308.13.1.el5 ro root=/dev/VolGroup00/LogVol00 initrd /initrd-2.6.18-308.13.1.el5.img
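    Two quick sanity checks that often narrow this down (a sketch; it assumes the CentOS 5 initrd is a gzipped cpio archive and the guest is defined through libvirt, and <vm-name> is a placeholder):

        # does the rebuilt initrd really contain the virtio modules?
        zcat /boot/initrd-2.6.18-308.13.1.el5.img | cpio -t 2>/dev/null | grep virtio

        # on the KVM host: is the disk actually attached with bus='virtio',
        # i.e. will the guest see it as /dev/vda at all?
        virsh dumpxml <vm-name> | grep -A 4 "<disk"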


  • Issue with PHP and osx 10.7 - runs via command line but not in browser

    - by jnolte
    I recently removed MAMP as I wanted to have more control over my machine and wanted to make use of PHP 5.4, which I installed using the script located here. Now I cannot even get the default PHP that is built into OS X to work. I am testing with a simple PHP script in a document in my ~/Sites directory. I am really at a loss as to why this will not work. I have PHP 5 installed in my /usr/local directory via the link provided above, and it seems like the main PHP is installed in /usr/bin. Any and all insight on how to debug this would be greatly appreciated.
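    One hedged place to look, assuming the stock Apache shipped with OS X 10.7 and its default paths: Apple's Apache has the PHP module commented out of httpd.conf, so .php files are returned as plain text until it is enabled. A sketch (the exact LoadModule line may differ on your install, so check the grep output before editing):

        # is the PHP module enabled in the built-in Apache? (commented out by default)
        grep -n "php5_module" /etc/apache2/httpd.conf

        # uncomment it (BSD sed in-place syntax) and restart Apache
        sudo sed -i '' 's|^#\(LoadModule php5_module libexec/apache2/libphp5.so\)|\1|' /etc/apache2/httpd.conf
        sudo apachectl restart

        # also worth checking which php binary the shell resolves to, versus what Apache loads
        which -a php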


  • Management Reporter Installation – Lessons Learned

    - by Ryan McBee
    After successfully completing several installations of Management Reporter this year, I wanted to share a few lessons learned that should help you. First, you will want to make sure that you install Management Reporter under a domain account as opposed to a local system or network service account. Management Reporter gives you the option to install under these accounts, but it is a best practice to use a domain account. After installing Management Reporter, you will want to make sure that Directory Browsing is enabled within the IIS server of your site, or you will have problems when you go to use Management Reporter. By default it is disabled in Server 2008 R2, and you will need to change the setting under the Actions pane in IIS Manager. Lastly, you will want to make sure that SQL Server is running under a domain account. I have had multiple situations where reports were stuck in the Queued status rather than the Processing status in Management Reporter. After reviewing resolution 5 of KB 2298248, it was determined that running SQL Server under a domain account is the way to go.
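    As a hedged alternative to clicking through IIS Manager, the same switch can be flipped from an elevated command prompt with appcmd on IIS 7.5 (the site name below is an assumption; use whichever site hosts Management Reporter):

        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/directoryBrowse /enabled:"True"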


  • "Size mismatch" apt error when installing openJDK

    - by siddanth
    When I try to install openjdk-7-jre-headless I get the following error:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed: ca-certificates-java icedtea-7-jre-jamvm java-common libcups2 libjpeg62 liblcms2-2 libnspr4 libnss3 libnss3-1d openjdk-7-jre-lib tzdata tzdata-java
        Suggested packages: default-jre equivs cups-common liblcms2-utils libnss-mdns sun-java6-fonts ttf-dejavu-extra ttf-baekmuk ttf-unfonts ttf-unfonts-core ttf-sazanami-gothic ttf-kochi-gothic ttf-sazanami-mincho ttf-kochi-mincho ttf-wqy-microhei ttf-wqy-zenhei ttf-indic-fonts-core ttf-telugu-fonts ttf-oriya-fonts ttf-kannada-fonts ttf-bengali-fonts
        The following NEW packages will be installed: ca-certificates-java icedtea-7-jre-jamvm java-common libcups2 libjpeg62 liblcms2-2 libnspr4 libnss3 libnss3-1d openjdk-7-jre-headless openjdk-7-jre-lib tzdata-java
        The following packages will be upgraded: tzdata
        1 upgraded, 12 newly installed, 0 to remove and 122 not upgraded.
        Need to get 41.2 MB/43.5 MB of archives.
        After this operation, 64.0 MB of additional disk space will be used.
        Get:5 http://in.archive.ubuntu.com/ubuntu/ oneiric/main java-common all 0.42ubuntu2 [62.4 kB]
        Fetched 41.1 MB in 4min 5s (167 kB/s)
        Failed to fetch http://in.archive.ubuntu.com/ubuntu/pool/main/j/java-common/java-common_0.42ubuntu2_all.deb Size mismatch
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
    I am unable to solve this. Am I missing something? Please help me out in solving this.
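    A common recovery sequence for this kind of "Size mismatch" failure, as a sketch (standard apt-get usage; switching mirrors is only needed if the failure persists):

        # throw away the partially downloaded .deb files and refresh the package index
        sudo apt-get clean
        sudo apt-get update

        # retry, letting apt re-fetch anything that failed
        sudo apt-get install --fix-missing openjdk-7-jre-headless

    If the in.archive.ubuntu.com mirror itself keeps serving a truncated java-common package, pointing /etc/apt/sources.list at a different mirror and re-running apt-get update is the usual next step.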


  • HP DL180 G6 P410 8x SATA 1TB, what is the optimal configuration?

    - by Oneiroi
    I have an HP DL180 G6 with a P410 RAID controller. Presently this runs 4x 1TB Samsung SpinPoint SATA drives in a RAID 10 configuration using default settings. I am about to add a backplane to increase the drive capacity from 4 to 12 drives, and I plan to install 4 more 1TB SATA drives. The drives are matched and have close serial numbers (they arrived together on the manufacturer's pallet): model HD103UJ, 1000 GB, 7200 rpm, 32 MB cache, rated for 3 Gb/s. I will also be installing RHEL 6.1 x86_64. My question is: what would be the optimal RAID settings (stripe size etc.) for this configuration? To recap: 8x HD103UJ (1000 GB/7200 rpm/32 MB, 3 Gb/s) in a RAID 10 configuration. Thanks in advance. Update on the server's role: it is to become an iSCSI target for an internal OpenStack deployment currently underway (Glance), and will also provide virtualisation through KVM.


  • OSB and Ubuntu 10.04 - Too Many Open Files

    - by jeff.x.davies
    When installing the latest Oracle Service Bus (11gR1PS3) onto my Ubuntu 10.04 system, the Eclipse IDE was complaining about there being too many open files. The Oracle Service Bus and the Oracle Enterprise Pack for Eclipse (aka OEPE) do make use of a LOT of files. By default, Ubuntu restricts each user to 1024 open files. A much more realistic number for OSB development is 4096. Changing the file limit in Ubuntu is fairly simple (if arcane). You will need to modify two different files and then restart your server. First, you need to modify the limits.conf file as the root user. Open a terminal window and enter the following command:
        sudo gedit /etc/security/limits.conf
    Add the following 2 lines to the file. The asterisk simply means that the rule will apply to all users.
        * soft nofile 4096
        * hard nofile 4096
    Save your changes and close gedit. The second file to change is the common-session file. Use the following command:
        sudo gedit /etc/pam.d/common-session
    Add the following line:
        session required pam_limits.so
    Save the file and exit gedit. Restart your machine. You shouldn't have any more problems with too many open files.
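    A quick way to confirm the change took effect (a sketch; run it in a fresh login shell after the restart, since pam_limits is applied when the session is set up):

        ulimit -n     # soft limit, should now report 4096
        ulimit -Hn    # hard limit, should also report 4096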


  • Warning message in boot.ini

    - by MA1
    Hi everyone, I have a dual-boot system with Windows XP Pro and Windows 7. The following are the contents of my system's boot.ini:
        ;Warning: Boot.ini is used on Windows XP and earlier operating systems.
        ;Warning: Use BCDEDIT.exe to modify Windows Vista boot options.
        ;
        [boot loader]
        timeout=30
        default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
        [operating systems]
        multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /NOEXECUTE=OPTIN /FASTDETECT
    I just want to know about the first two warning lines: are they always present in a dual-boot system when the boot process is different for the installed operating systems, for example XP + Vista/Win7 or Windows 2000 + Vista/Win7? Regards,


  • What could cause a script to fail to find python when it has `#!/usr/bin/env python` in the first line?

    - by jcollum
    Trying to get casperjs running on Ubuntu 12.04. After installing it, this is what I get when I run it:
        09:20 $ ll /usr/local/bin/casperjs
        lrwxrwxrwx 1 root root 26 Nov 6 16:49 /usr/local/bin/casperjs -> /opt/casperjs/bin/casperjs
        09:20 $ /usr/bin/env python --version
        Python 2.7.3
        09:20 $ cat /opt/casperjs/bin/casperjs | head -4
        #!/usr/bin/env python
        import os
        import sys
        09:20 $ casperjs
        : No such file or directory
        09:22 $ python
        Python 2.7.3 (default, Sep 26 2013, 20:03:06) [GCC 4.6.3] on linux2
    So Python is present and runnable, casperjs is pointing to the right place, and it is a Python script. But when I run it I get "No such file". I can fix it by changing the first line of the casperjs Python file from #!/usr/bin/env python to #!/usr/bin/python. Result:
        $ casperjs --version
        1.1.0-DEV
    I managed to fix it, but I'm wondering why it didn't work with #!/usr/bin/env python, since that seems to be a normal interpreter line. Do I have something configured wrong? Here are the steps I used to get casperjs:
        $ git clone git://github.com/n1k0/casperjs.git
        $ cd casperjs
        $ ln -sf `pwd`/bin/casperjs /usr/local/bin/casperjs
        $ casperjs
        : No such file or directory
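    One hedged diagnostic worth adding, since this exact symptom is very often a stray carriage return rather than an env problem: with a DOS line ending the kernel tries to run an interpreter literally named "python\r", which does not exist, and hand-editing the shebang usually rewrites the line ending as a side effect, which would explain why the edit "fixed" it. This is only an assumption about your copy of the script; the checks below confirm or rule it out:

        head -1 /opt/casperjs/bin/casperjs | cat -A    # a trailing ^M before the $ means CRLF endings
        file /opt/casperjs/bin/casperjs                # would mention "CRLF line terminators"
        sed -i 's/\r$//' /opt/casperjs/bin/casperjs    # only if CRLF was found: strip it in place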


  • Bridging Network Devices with Multiple IPs

    - by Andy
    I have a small server with a single NIC on which I am trying to get a bridge working so that I can run KVM. On this NIC I have a couple of IPs statically assigned:
        eth0   = 192.168.1.1
        eth0:1 = 192.168.1.2
        eth0:2 = 192.168.1.3
        eth0:3 -> assign the bridge to this
    I am attempting to set up the bridge using the following commands:
        sudo brctl addbr br0
        sudo brctl addif br0 eth0:3
        sudo ifconfig br0 192.168.1.120 netmask 255.255.255.0 up
        sudo route add -net 192.168.1.0 netmask 255.255.255.0 br0
        sudo route add default gw 192.168.1.1 br0
        sudo tunctl -b -u root -t tap0 > /dev/null
        sudo ifconfig tap0 up
        sudo brctl addif br0 tap0
    However, the second command (sudo brctl addif br0 eth0:3) puts the ENTIRE eth0 device into promiscuous mode. This knocks the server offline, leaving it reachable only locally. Is there a way to bridge JUST eth0:3 to br0 without putting the entire device into promiscuous mode?
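    A hedged aside on why this happens and what a working layout often looks like: eth0:3 is only an address alias, not a separate device, so brctl ends up enslaving eth0 itself, which is why the whole NIC goes into promiscuous mode. The usual arrangement is to bridge the physical eth0 and move the addresses onto br0. A sketch of an /etc/network/interfaces stanza under that assumption (Debian/Ubuntu style, bridge-utils installed; adjust addresses to taste):

        auto br0
        iface br0 inet static
            address 192.168.1.120
            netmask 255.255.255.0
            gateway 192.168.1.1
            bridge_ports eth0
            bridge_stp off
        # eth0 itself then carries no address; the other IPs can be re-added as aliases on br0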


  • New apache on mac

    - by Keith
    I have installed php5, apache2, mysql5, and postgresql84 using MacPorts. I realize my Mac already has Apache, but it didn't have Apache 2 or PostgreSQL hooked up to use with PHP. I don't want to use the default Apple Apache; I want to use the new MacPorts install. How do I tell my computer to stop looking at the old Apache? When I run apachectl in the terminal I believe it is using the old Apache. I would like to hook it up to use the new one. How would I do that? The new stuff is installed at /opt/local/apache2 and the old stuff is installed at /private/etc/apache2. I went to System Preferences > Sharing and shut off Web Sharing, but when I run apachectl commands, that turns it on and off in the preferences. I'm running Snow Leopard.
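    A hedged sketch of the usual way to make the switch on Snow Leopard (the launchd job name and MacPorts paths are the standard defaults, but treat them as assumptions for your install):

        # stop and disable Apple's bundled Apache (this is the daemon Web Sharing toggles)
        sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist

        # start the MacPorts Apache and register it to start at boot
        sudo port load apache2
        # (on older MacPorts: sudo launchctl load -w /Library/LaunchDaemons/org.macports.apache2.plist)

        # a bare "apachectl" still resolves to /usr/sbin/apachectl (the old one);
        # check what you are actually running and put the MacPorts bin dir first in PATH
        which -a apachectl
        export PATH=/opt/local/apache2/bin:$PATH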


  • Sharepoint 2007: Edit vs Read Only Mode

    - by user29116
    Sorry about the title; I don't really know what it should be. If I open a doc in read-only mode, I'm able to press Save, and it opens a Save As box whose default directory is the directory on the SharePoint server; if you press Save, you save it to the server. This makes the whole process not really "read only", since I could actually update the document. Is there a way to prevent this, so that if someone chooses read-only there is no way to upload any changes back to the SharePoint site? Also, it has been suggested as a solution to get rid of the edit/read-only prompt so that people have to check out the document. Is there a way to remove the edit/read-only option on documents?


  • Task Manager: VM Size smaller than Mem usage?

    - by shoosh
    The Windows XP Task Manager can show two different columns regarding the memory usage of processes. One is called Mem Usage and the other is VM Size (not shown by default; you need to enable it). From what I've gathered, VM Size is the size of the entire memory space occupied by the process, and Mem Usage is the amount of memory currently committed and used. This assumption holds for most processes, where VM Size is only slightly larger than Mem Usage; for instance, my Outlook currently has 79,724 K in VM Size and 56,600 K in Mem Usage. But it fails for other processes such as Firefox, which currently has 171,900 K for Mem Usage and only 156,440 K in VM Size. How can a process use more memory than the amount of virtual memory allocated to it? So maybe my interpretation of these columns is wrong. What do they actually mean?


  • Permission denied on /dev/xvdf despite sudo?

    - by Sid
    I'm trying to get a stream off port 9999 and write to /dev/xvdf. I'm using Amazon EC2 with the Ubuntu 11.10 server image. By default I log in as 'ubuntu', which has sudo privileges. However, when I run the following netcat command, I get this error:
        ubuntu@ip-10-252-35-122:~$ ls -al /dev/xv*
        brw-rw---- 1 root disk 202,  1 2011-11-30 22:22 /dev/xvda1
        brw-rw---- 1 root disk 202, 80 2011-11-30 22:27 /dev/xvdf
        ubuntu@ip-10-252-35-122:~$ sudo netcat -p 9999 -l > /dev/xvdf
        bash: /dev/xvdf: Permission denied
        ubuntu@ip-10-252-35-122:~$
    Any idea why I get the permission denied error and how I can work around it? Update: Something mysterious is running in the background that resets the permissions? Check the snippet below; the permission flags seem to automatically reset when I try using /dev/xvdf !?!
        ubuntu@ip-10-252-35-122:~$ sudo chmod 777 /dev/xvdf
        ubuntu@ip-10-252-35-122:~$ ls -al /dev/xvdf
        brwxrwxrwx 1 root disk 202, 80 2011-11-30 22:43 /dev/xvdf
        ubuntu@ip-10-252-35-122:~$ sudo nc -p 9999 -l > /dev/xvdf
        This is nc from the netcat-openbsd package. An alternative nc is available in the netcat-traditional package.
        usage: nc [-46DdhklnrStUuvzC] [-i interval] [-P proxy_username] [-p source_port] [-s source_ip_address] [-T ToS] [-w timeout] [-X proxy_protocol] [-x proxy_address[:port]] [hostname] [port[s]]
        ubuntu@ip-10-252-35-122:~$ ls -al /dev/xvdf
        brw-rw---- 1 root disk 202, 80 2011-11-30 22:43 /dev/xvdf
        ubuntu@ip-10-252-35-122:~$
    I'm using stock Ubuntu Amazon EC2 images from http://alestic.com/ (links to images at the top).
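    A hedged note on both error messages, since each has a mundane explanation: the redirection "> /dev/xvdf" is performed by the calling shell, which runs as the ubuntu user, so it is refused before sudo ever starts netcat; and the OpenBSD nc build rejects -p together with -l, which is why the second attempt only printed usage text. (The chmod 777 reverting on its own is likely udev reapplying its default permissions for the device node.) Two ways to run the whole pipeline with root privileges:

        # run the listener and the redirection inside a root shell
        sudo sh -c 'nc -l 9999 > /dev/xvdf'

        # or keep nc unprivileged and let tee, under sudo, do the writing
        nc -l 9999 | sudo tee /dev/xvdf > /dev/null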


  • Specify a custom dictionary for FxCop and Visual Studio source analysis

    - by Marko Apfel
    Renaming the default custom dictionary from CustomDictionary.xml to another name, for instance FxCop.CustomDictionary.xml, requires some additional changes in the applications involved.
    Visual Studio Team System code analysis: for Visual Studio Team System code analysis, the file should be added as a link to all projects and its Build Action set to CodeAnalysisDictionary.
    Build target: in a build target, the command-line tool FxCopCmd should be called with the /dictionary parameter:
        <Target Name="FxCop">
          <Exec Command="&quot;$(ProjectDir)..\..\build\FxCop\FxCopCmd.exe&quot; /file:&quot;$(TargetPath)&quot; /project:&quot;$(ProjectDir)..\EsriDE.SfgPraxair.FxCop&quot; /directory:&quot;$(ProjectDir)..\..\lib\Esri.ArcGIS&quot; /directory:&quot;$(ProjectDir)..\..\lib\Microsoft&quot; /dictionary:&quot;$(ProjectDir)..\FxCop.CustomDictionary.xml&quot; /out:&quot;$(OutDir)..\$(ProjectName).FxCopReport.xml&quot; /console /forceoutput /ignoregeneratedcode">
          </Exec>
          <Message Text="FxCop finished." />
        </Target>
    FxCop GUI (standalone application): in the FxCop GUI there is no option to specify a custom file name, but you can add a hint in the FxCop project file. Open that file and look for the line:
        <CustomDictionaries SearchFxCopDir="True" SearchUserProfile="True" SearchProjectDir="True" />
    Then change it to:
        <CustomDictionaries SearchFxCopDir="True" SearchUserProfile="True" SearchProjectDir="True">
          <CustomDictionary Path="FxCop.CustomDictionary.xml"/>
        </CustomDictionaries>
    Ready :-)


  • How does the Ubuntu cloud version enforce the "no root login" over SSH?

    - by Maxim Veksler
    Hello, I'm looking to tweak the Ubuntu cloud image's default setup, where it denies root login. Attempting to connect to such a machine yields:
        maxim@maxim-desktop:~/workspace/integration/deployengine$ ssh root@ec2-204-236-252-95.compute-1.amazonaws.com
        The authenticity of host 'ec2-204-236-252-95.compute-1.amazonaws.com (204.236.252.95)' can't be established.
        RSA key fingerprint is 3f:96:f4:b3:b9:4b:4f:21:5f:00:38:2a:bb:41:19:1a.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added 'ec2-204-236-252-95.compute-1.amazonaws.com' (RSA) to the list of known hosts.
        Please login as the ubuntu user rather than root user.
        Connection to ec2-204-236-252-95.compute-1.amazonaws.com closed.
    I would like to know where this is set up and how I can change the printed message. Thank you, Maxim.
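    A hedged pointer, based on how stock Ubuntu cloud images are provisioned by cloud-init (treat the exact file contents as assumptions): the message normally comes from a forced command placed in root's authorized_keys, and whether it is installed at all is controlled by the disable_root setting in /etc/cloud/cloud.cfg.

        # the forced command that prints the message usually sits here
        sudo cat /root/.ssh/authorized_keys
        # typically something like: command="echo 'Please login as the ubuntu user ...';echo;sleep 10" ssh-rsa AAAA...

        # the knob that tells cloud-init to install that forced command
        grep -n disable_root /etc/cloud/cloud.cfg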


  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover:
        SQLU part 1 - What and why of database testing
        SQLU part 2 - What and why of database refactoring
        SQLU part 3 - Tools of the trade
    This is the second part of the series, and in it we'll take a look at what database refactoring is and why to do it.
    Why refactor a database
    To know why we refactor, we first have to know what refactoring actually is. Code refactoring is a process where we change module internals in a way that does not change that module's input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: tests. Automated unit tests are the only guarantee we have that we haven't broken the input/output behavior before refactoring. If you haven't, go back and read my post on the matter. Then start writing them. The next thing you need is a code module. Those are views, UDFs and stored procedures. With direct table access we can kiss fast and sweet refactoring goodbye. One more point in favor of having a database abstraction layer. And no, ORMs don't fall into that category. But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don't be one of them and resist the lure of the dark side. And it's a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus. Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows or some other reason. This is a typical black box scenario to the outside world. If we upgrade the car engine it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white box tests will break when we refactor.
    What to refactor in a database
    Refactoring databases doesn't happen that often, but when it does it can include a lot of stuff. Let us look at a few common cases.
    Adding or removing database schema objects
    Adding, removing or changing table columns in any way, adding constraints, keys, etc. All of these can be counted as internal changes not visible to the data consumer. But each of these carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint shows duplicated data that shouldn't exist. Foreign keys break a truncate table command executed from an application that runs once a month. All these scenarios are very real and can happen. With a proper database abstraction layer fully covered with black box tests we can make sure something like that does not happen (hopefully at all).
    Changing physical structures
    Physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database. But the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders of magnitude performance improvement. Won't that make users happy? But what if that index causes our write operations to crawl to a stop? Again we have to test this. There are a lot of things to think about and have tests for. Without tests we can't do successful refactoring!
    Fixing bad code
    We all have some bad code in our systems. We usually refer to such code as code smells, as it violates good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc. Each of those is a huge code smell and can result in major code changes. Take SELECT * for example. If we remove a column from a table, the client using that SELECT * statement won't have a clue about it until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Thomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail. Go read about it, it's informative. Refactoring this includes replacing the * with column names and most likely a change to the application using the database.
    Breaking apart huge stored procedures
    Have you ever seen a stored procedure that was 2000 lines long? I have. It's not pretty. It hurts the eyes and sucks the will to live for the next 10 minutes. They are a maintenance nightmare and turn into things no one dares to touch. I'm willing to bet that 100% of the time they don't have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database. That's the application's part. Refactoring such behemoths requires writing lots of edge case tests for the stored procedure's input/output behavior and only then starting to refactor. First we split the logic inside into smaller parts like new stored procedures and UDFs. Those then get called from the master stored procedure. Once we've successfully modularized the database code, it's best to transfer that logic into the applications consuming it. This only leaves the stored procedure with common data manipulation logic. Of course this isn't always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we've actually improved the codebase in some way.
    Refactoring is not a popular chore amongst developers or managers. The former don't like fixing old code, the latter can't see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I've given you some ideas on how to get started. In the last post of the series we'll take a look at the tools to use and an example of testing and refactoring.


  • Creating self-signed SSL on IIS - Remote access problem

    - by ile
    I followed these instructions to create a self-signed SSL certificate: http://www.visualwin.com/SelfSSL/ (I opened SelfSSL and typed selfssl /T). When I access https://localhost/ it works, but when I try to access it remotely (I set up my router to port-forward to localhost), for example https://myip, the page does not load. Also, I noticed one other thing. When I access localhost locally I am asked to enter a username/password, but if I access it remotely I get the following warning: Under Construction The site you were trying to reach does not currently have a default page. It may be in the process of being upgraded and configured. ... I don't know if it is related to this, but I hope someone knows the answer. Thanks, Ile


  • Focus follows mouse stops working when opening window from launcher and no click to focus

    - by user97600
    This is the 12.04 default desktop (Unity). I set it to focus-follows-mouse and changed the menus to be on the window. This worked for a while, then some unknown event, maybe an upgrade or maybe some other setting change, caused it to stop working. There are many ways for this behavior to start, but one reliable one is to bring a window to the foreground/focus with the launcher. Now the focus is stuck on that window, and not just the window but the regions within the window, so the close, maximize and minimize buttons and the menus do not work. I have to use mouse middle and then mouse right, and then focus-follows-mouse is restored for a bit. The exact details of the mouse action aren't clear; sometimes it seems like just mouse middle helps, sometimes just right, sometimes a desperate sequence of clicks :-( I have tried switching to the GNOME desktop and it seems to occur less there, but it is not eliminated. I have tried switching mice to an old wired USB mouse. I have tried creating a new account and that has not worked. I have observed "split focus" where the scroll wheel scrolls one window but the input goes to another. I got trapped recently where my keyboard input went to LibreOffice Calc, but I was selecting the search term in the Chrome address bar. The selection "grayed" but the keyboard input for the search went to LibreOffice. Regions in windows have very confused focus. I have to work hard to get focus on, for example, the close glyph (X) or the minimize glyph (_).


  • OBIEE 11.1.1 - How to enable HTTP compression and caching in Oracle iPlanet Web Server

    - by Ahmed Awan
    1. To implement HTTP compression/caching, install and configure Oracle iPlanet Web Server 7.0.x for the bi_serverN Managed Servers (refer to http://docs.oracle.com/cd/E23943_01/web.1111/e16435/iplanet.htm).
    2. On the Oracle iPlanet Web Server machine, open the Administrator's Configuration file (obj.conf) for editing. (Guidelines for modifying the obj.conf file are available at http://download.oracle.com/docs/cd/E19146-01/821-1827/821-1827.pdf.)
    3. Add the following lines to the obj.conf file inside <Object name="default"> . </Object> and restart the Oracle iPlanet Web Server machine:
        #HTTP Caching
        <If $path =~ '^(.*)\.(jpg|jpeg|gif|png|css|js)$'>
        ObjectType fn="set-variable" insert-srvhdrs="Expires:$(httpdate($time + 864000))"
        </If>
        <If $path =~ '^(.*)\.(jpg|jpeg|gif|png|css|js)$'>
        PathCheck fn="set-cache-control" control="public,max-age=864000"
        </If>
        #HTTP Compression
        Output fn="insert-filter" filter="http-compression" vary="false" compression-level="9" fragment_size="8096"


  • Getting jerky backgrounds on Win7 on iMac 27'' 2560x1440

    - by JohnIdol
    I installed Win7 via Boot Camp on my new iMac 27'' (ATI video card) and everything was good until recently, when I noticed that the default Win7 background (the one shown behind the login screen) looked jerky. When I say jerky I mean the kind of jerkiness you get when not enough colours can be displayed: instead of nice fading shades you just get stripes and jerky patterns. I am on the native resolution, but even if I go down to 1920x1080 I get the same. This might have happened after a firmware update, but as I don't use Windows very often I am not too sure that's what caused it. Oh, and when I am playing games everything looks OK (as in not jerky!). Any help appreciated!


  • IdentityServer Beta 1 Refresh & Windows Azure Support

    - by Your DisplayName here!
    I just uploaded two new releases to Codeplex.
    IdentityServer B1 refresh: a number of bug fixes and streamlined extensibility interfaces, mostly a result of adding the Windows Azure support. Nothing has changed with regard to setup. Make sure you watch the intro video on the Codeplex site.
    IdentityServer B1 (Windows Azure Edition): I have now implemented all repositories for Windows Azure-backed data storage. The default setup assumes you use the ASP.NET SQL membership provider database in SQL Azure and Table Storage for relying party, client certificate and delegation settings. The setup is simple:
        - Upload your SSL/signing certificate via the portal
        - Adjust the .cscfg file – you need to insert your storage account, certificate thumbprint and distinguished name (there is a setup tool that can automatically insert the certificate distinguished names into your config file)
        - Adjust the connection string for the membership database in WebSite\configuration\connectionString.config
        - Deploy
    Feedback: feature-wise this looks like the V1 release to me. It would be great if you could give me feedback when you find a bug etc., especially: Do the built-in repository implementations work for you (both on-premise and Azure)? Are the repository interfaces enough to add your own data store or feature?


  • Self hosted PHP shopping cart with no storefront?

    - by Question
    I am looking for a shopping cart to implement on a simple website instead of the default PayPal cart that is used with their Add to Cart buttons (I don't like the non-styleable new tab/window cart). However, I really like the ability to simply add the buttons to existing pages. I do not have a lot of products and do not want to deal with a storefront and complex templates. The main features I need:
        - Self-hosted
        - Easy to implement on an existing website (copy and paste button code, etc.)
        - Ability to have variations on one button with different prices (dropdown with sizes, colors, etc.)
        - Ability to track inventory and disallow out-of-stock orders
        - Ability to pass cart details to PayPal Website Payments Standard
    I have seen most of the large storefront options: oscommerce, zencart, cubecart, opencart, prestashop, magento, cs-cart, lemonstand, etc., but these are way more than I need. I don't need the storefront or customer accounts or templated pages, etc. I have seen e-junkie, which is not far off from what I would like, but it is not self-hosted, and I would prefer an in-site cart (or dynamic overlay cart) rather than a lightbox or new tab/window cart. I also love the PayPal minicart and its implementation, but there is no way to track inventory. So, does anyone have any recommendations that might meet these requests?


  • How to install an init.d script in ubuntu?

    - by suhail
    I am trying to install an init.d script to run celery for scheduling tasks. Here are the steps I followed:
        - copied the file celeryd into the folder /etc/init.d/
        - created a configuration file celeryd in the folder /etc/default/
    Now when I try to start it with sudo /etc/init.d/celeryd start, it throws the error:
        sudo: /etc/init.d/celeryd: command not found
    I googled how to install an init.d script and found this SO question. It says to issue uname -a, and when I do I get this:
        Linux capsonesystem8-desktop 3.2.0-43-generic-pae #68-Ubuntu SMP Wed May 15 03:55:10 UTC 2013 i686 i686 i386 GNU/Linux
    It also says to use utilities like insserv to enable the init.d script, so I tried insserv /etc/init.d/celeryd, but it throws the error insserv: command not found. So I tried to install insserv with sudo apt-get install insserv, but it says it is already installed:
        insserv is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 222 not upgraded.
    So how do I install the init.d script? Any help will be appreciated.
    Update 1: when I tried $ sh -x /etc/init.d/celeryd start it revealed some errors. Maybe that is why the service won't start.
    Update 2: I cleared all the errors when I run $ sh -x /etc/init.d/celeryd start, but sudo /etc/init.d/celeryd start still throws the command-not-found error.
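    One hedged guess worth checking first (an assumption, since the script clearly exists at that path): sudo reports a file it cannot execute as "command not found", so a copied script that lost its execute bit produces exactly this symptom; it would also explain why sh -x /etc/init.d/celeryd runs while direct execution does not. Making it executable and registering it with update-rc.d (Ubuntu's counterpart to insserv) would look like:

        sudo chmod 755 /etc/init.d/celeryd
        sudo /etc/init.d/celeryd start
        sudo update-rc.d celeryd defaults    # enable it at boot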

