Search Results

Search found 29820 results on 1193 pages for 'default implementation'.

Page 688/1193

  • "Size mismatch" apt error when installing openJDK

    - by siddanth
    When I try to install openjdk-7-jre-headless I get the following error:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          ca-certificates-java icedtea-7-jre-jamvm java-common libcups2 libjpeg62 liblcms2-2
          libnspr4 libnss3 libnss3-1d openjdk-7-jre-lib tzdata tzdata-java
        Suggested packages:
          default-jre equivs cups-common liblcms2-utils libnss-mdns sun-java6-fonts ttf-dejavu-extra
          ttf-baekmuk ttf-unfonts ttf-unfonts-core ttf-sazanami-gothic ttf-kochi-gothic
          ttf-sazanami-mincho ttf-kochi-mincho ttf-wqy-microhei ttf-wqy-zenhei ttf-indic-fonts-core
          ttf-telugu-fonts ttf-oriya-fonts ttf-kannada-fonts ttf-bengali-fonts
        The following NEW packages will be installed:
          ca-certificates-java icedtea-7-jre-jamvm java-common libcups2 libjpeg62 liblcms2-2
          libnspr4 libnss3 libnss3-1d openjdk-7-jre-headless openjdk-7-jre-lib tzdata-java
        The following packages will be upgraded:
          tzdata
        1 upgraded, 12 newly installed, 0 to remove and 122 not upgraded.
        Need to get 41.2 MB/43.5 MB of archives.
        After this operation, 64.0 MB of additional disk space will be used.
        Get:5 http://in.archive.ubuntu.com/ubuntu/ oneiric/main java-common all 0.42ubuntu2 [62.4 kB]
        Fetched 41.1 MB in 4min 5s (167 kB/s)
        Failed to fetch http://in.archive.ubuntu.com/ubuntu/pool/main/j/java-common/java-common_0.42ubuntu2_all.deb  Size mismatch
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    I am unable to solve this. Am I missing something? Please help me out in solving this.
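
    As a starting point, here is a minimal sketch of the recovery steps the apt error message itself suggests: clear the partially downloaded archives, refresh the package lists, then retry with --fix-missing.

        sudo apt-get clean
        sudo apt-get update
        sudo apt-get install --fix-missing openjdk-7-jre-headless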

    Read the article

  • Request Tracker 4.x on Ubuntu 12.04

    - by rihatum
    I have an Ubuntu 12.04 server installed on my machine and I am trying to install request-tracker4. Here's what I have done so far:

    a) Installed request-tracker4 via "sudo apt-get install request-tracker4"
    b) I then tried configuring RT_SiteConfig.pm in /etc/request-tracker4 but ran into problems populating the MySQL database.
    c) I then did sudo dpkg-reconfigure request-tracker4
    d) That solved my problems with populating / setting up MySQL.
    e) Now I am trying to set up RT under www.mydomain.com/rt

    I have read various how-tos and Best Practical's own guides, but I am not much of an expert in Apache configuration, so I am stuck. My current Ubuntu 12.04 server setup: Apache2 with FastCGI installed (checked in /etc/apache2/mods-enabled), web server document root is the default /var/www/, web user is www-data.

    Question: where and what should I put in the Apache configuration to start using RT via the web interface? I have seen two files in /etc/request-tracker4/: apache2-fastcgi.conf and apache2-fcgid.conf. I even tried making a symlink (ln -s apache2-fastcgi.conf /etc/apache2/conf.d) but when I tried opening that file as root while in the conf.d directory it said "too many levels". Any Request Tracker experts on Ubuntu? :-) Your help will be very useful and appreciated. Thanks. Please let me know if you need further info!
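
    For reference, a minimal sketch of the symlink approach attempted above, using an absolute path (a relative ln -s run against conf.d creates a link that points at itself, which is what produces the "too many levels of symbolic links" error):

        sudo ln -sf /etc/request-tracker4/apache2-fastcgi.conf /etc/apache2/conf.d/request-tracker4.conf
        sudo service apache2 reload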

    Read the article

  • OSB and Ubuntu 10.04 - Too Many Open Files

    - by jeff.x.davies
    When installing the latest Oracle Service Bus (11gR1PS3) onto my Ubuntu 10.04 system, the Eclipse IDE was complaining about there being too many open files. The Oracle Service Bus and the Oracle Enterprise Pack for Eclipse (aka OEPE) do make use of a lot of files. By default, Ubuntu will restrict each user to 1024 open files. A much more realistic number for OSB development is 4096. Changing the file limit in Ubuntu is fairly simple (if arcane). You will need to modify two different files and then restart your server.

    First, you need to modify the limits.conf file as the root user. Open a terminal window and enter the following command:

        sudo gedit /etc/security/limits.conf

    Add the following 2 lines to the file. The asterisk simply means that the rule will apply to all users.

        * soft nofile 4096
        * hard nofile 4096

    Save your changes and close gedit. The second file to change is the common-session file. Use the following command:

        sudo gedit /etc/pam.d/common-session

    Add the following line:

        session required pam_limits.so

    Save the file and exit gedit. Restart your machine. You shouldn't have any more problems with too many open files.
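
    A quick way to confirm the change took effect (after logging back in or restarting, as described above):

        ulimit -n    # should now report 4096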

    Read the article

  • Tellago releases a RESTful API for BizTalk Server business rules

    - by Charles Young
    Jesus Rodriguez has blogged recently on Tellago DevLabs' release of an open source RESTful API for BizTalk Server Business Rules. This is an excellent addition to the BizTalk ecosystem and I congratulate Tellago on their work. See http://weblogs.asp.net/gsusx/archive/2011/02/08/tellago-devlabs-a-restful-api-for-biztalk-server-business-rules.aspx

    The Microsoft BRE was originally designed to be used as an embedded library in .NET applications. This is reflected in the implementation of the Rules Engine Update (REU) Service, which is a TCP/IP service hosted by a Windows service running locally on each BizTalk box. The job of the REU is to distribute rules, managed and held in a central database repository, across the various servers in a BizTalk group. The engine is therefore distributed on each box, rather than exploited behind a central rules service.

    This model is all very well, but proves quite restrictive in enterprise environments. The problem is that the BRE can only run legally on licensed BizTalk boxes. Increasingly we need to deliver rules capabilities across a more widely distributed environment. For example, in the project I am working on currently, we need to surface decisioning capabilities for use within WF workflow services running under AppFabric on non-BTS boxes. The BRE does not, currently, offer any centralised rule service facilities out of the box, and hence you have to roll your own (and then run your rules services on BTS boxes, which has raised a few eyebrows on my current project, as all other WCF services run on a dedicated server farm).

    Tellago's API addresses this by providing a RESTful API for querying the rules repository and executing rule sets against XML passed in the request payload. As Jesus points out in his post, using a RESTful approach hugely increases the reach of BRE-based decisioning, allowing simple invocation from code written in dynamic languages, mobile devices, etc.

    We developed our own SOAP-based general-purpose rules service to handle scenarios such as the one we face on my current project. SOAP is arguably better suited to enterprise service bus environments (please don't 'flame' me - I refuse to engage in the RESTful vs. SOAP war). For example, on my current project we use claims-based authorisation across the entire service bus and use WIF and WS-Federation for this purpose. We have extended this to the rules service. I can't release the code for commercial reasons :-( but this approach allows us to legally extend the reach of BRE far beyond the confines of the BizTalk boxes on which it runs and to provide general-purpose decisioning capabilities on the bus.

    So, well done Tellago. I haven't had a chance to play with the API yet, but am looking forward to doing so.

    Read the article

  • How does the Ubuntu cloud version enforce the "no root login" over SSH?

    - by Maxim Veksler
    Hello, I'm looking to tweak the Ubuntu cloud version's default setup, where it denies root login. Attempting to connect to such a machine yields:

        maxim@maxim-desktop:~/workspace/integration/deployengine$ ssh [email protected]
        The authenticity of host 'ec2-204-236-252-95.compute-1.amazonaws.com (204.236.252.95)' can't be established.
        RSA key fingerprint is 3f:96:f4:b3:b9:4b:4f:21:5f:00:38:2a:bb:41:19:1a.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added 'ec2-204-236-252-95.compute-1.amazonaws.com' (RSA) to the list of known hosts.
        Please login as the ubuntu user rather than root user.
        Connection to ec2-204-236-252-95.compute-1.amazonaws.com closed.

    I would like to know where this is set up and how I can change the printed message. Thank you, Maxim.
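
    For context, on the stock Ubuntu cloud images this behaviour typically comes from a forced command prefixed to root's key in /root/.ssh/authorized_keys (together with the disable_root option in /etc/cloud/cloud.cfg). The sketch below is illustrative only, as the exact options and wording vary by image; editing or removing that command="..." prefix changes or removes the printed message:

        # /root/.ssh/authorized_keys (sketch)
        command="echo 'Please login as the ubuntu user rather than root user.';echo;sleep 10" ssh-rsa AAAA...your-key... mykey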

    Read the article

  • New Apache on Mac

    - by Keith
    I have installed php5, apache2, mysql5 and postgresql84 using MacPorts. I realize my Mac already has Apache, but it didn't have Apache2 nor PostgreSQL hooked up for use with PHP. I want to stop using the default Apple Apache and use the new MacPorts install. How do I tell my computer to stop looking at the old Apache? When I run apachectl in the terminal I believe it is using the old Apache. I would like to hook it up to use the new one. How would I do that? The new stuff is installed at /opt/local/apache2 and the old stuff is at /private/etc/apache2. I went to System Preferences > Sharing and shut off Web Sharing, but when I run apachectl commands that turns it on and off in the preferences. I'm running Snow Leopard.
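
    A hedged sketch of the usual MacPorts approach (assuming the default /opt/local prefix): disable the built-in Apache, register the MacPorts one with launchd, and put /opt/local/apache2/bin ahead of /usr/sbin in PATH so apachectl resolves to the new binary:

        sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist   # stop Apple's Apache
        sudo port load apache2                                                          # start the MacPorts Apache at boot
        export PATH=/opt/local/apache2/bin:$PATH
        which apachectl                                                                 # should now show /opt/local/apache2/bin/apachectl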

    Read the article

  • SQL SERVER – Follow up – Usage of $rowguid and $IDENTITY

    - by pinaldave
    The most common question I receive is why do I blog? The answer is even simpler – I blog because I get extremely constructive comments and conversation from people like DHall and Kumar Harsh. Earlier this week, I shared a conversation between Madhivanan and myself regarding how to find out if a table uses ROWGUID or not. I encourage all of you to read the conversation here: SQL SERVER – Identifying Column Data Type of uniqueidentifier without Querying System Tables. In simple words, the conversation between Madhivanan and myself brought out a simple query which returns the values of the UNIQUEIDENTIFIER without knowing the name of the column.

    David Hall wrote a few excellent comments as a follow-up, and every SQL enthusiast should read them first, second and third. David always brings positive energy: he first shows the limitations of my solution here and here, which he follows up with his own solution here. As he said, his solution is also not perfect, but it indeed leaves learning bites for all of us – worth reading if you are interested in unorthodox solutions.

    Kumar Harsh suggested that one can also find the identity column used in the table in a very similar way using $IDENTITY. Here is how one can do the same:

        DECLARE @t TABLE (
            GuidCol UNIQUEIDENTIFIER DEFAULT newsequentialid() ROWGUIDCOL,
            IDENTITYCL INT IDENTITY(1,1),
            data VARCHAR(60)
        )
        INSERT INTO @t (data) SELECT 'test'
        INSERT INTO @t (data) SELECT 'test1'
        SELECT $rowguid, $IDENTITY FROM @t

    There are alternate ways to find an identity column in the database as well. The following query will give a list of all identity column names with their corresponding table names:

        SELECT SCHEMA_NAME(so.schema_id) SchemaName, so.name TableName, sc.name ColumnName
        FROM sys.objects so
        INNER JOIN sys.columns sc ON so.OBJECT_ID = sc.OBJECT_ID AND sc.is_identity = 1

    Let me know if you use any alternate method related to identity; I would like to know what you do and how you do it when you have to deal with an identity column.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
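
    As another (untested) alternative along the same lines, SQL Server also exposes identity columns directly through the sys.identity_columns catalog view:

        SELECT OBJECT_SCHEMA_NAME(object_id) SchemaName,
               OBJECT_NAME(object_id) TableName,
               name ColumnName
        FROM sys.identity_columns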

    Read the article

  • Permission denied on /dev/xvdf despite sudo?

    - by Sid
    I'm trying to get a stream off port 9999 and write it to /dev/xvdf. I'm using Amazon EC2 with the Ubuntu 11.10 server image. By default I log in as 'ubuntu', which has sudo privileges. However, when I run the following netcat command, I get this error:

        ubuntu@ip-10-252-35-122:~$ ls -al /dev/xv*
        brw-rw---- 1 root disk 202,  1 2011-11-30 22:22 /dev/xvda1
        brw-rw---- 1 root disk 202, 80 2011-11-30 22:27 /dev/xvdf
        ubuntu@ip-10-252-35-122:~$ sudo netcat -p 9999 -l > /dev/xvdf
        bash: /dev/xvdf: Permission denied

    Any idea why I get the permission denied error and how I can work around it?

    Update: Something mysterious is running in the background that resets the permissions? Check the snippet below: the permission flags seem to automatically reset when I try using /dev/xvdf!?!

        ubuntu@ip-10-252-35-122:~$ sudo chmod 777 /dev/xvdf
        ubuntu@ip-10-252-35-122:~$ ls -al /dev/xvdf
        brwxrwxrwx 1 root disk 202, 80 2011-11-30 22:43 /dev/xvdf
        ubuntu@ip-10-252-35-122:~$ sudo nc -p 9999 -l > /dev/xvdf
        This is nc from the netcat-openbsd package. An alternative nc is available in the netcat-traditional package.
        usage: nc [-46DdhklnrStUuvzC] [-i interval] [-P proxy_username] [-p source_port]
                  [-s source_ip_address] [-T ToS] [-w timeout] [-X proxy_protocol]
                  [-x proxy_address[:port]] [hostname] [port[s]]
        ubuntu@ip-10-252-35-122:~$ ls -al /dev/xvdf
        brw-rw---- 1 root disk 202, 80 2011-11-30 22:43 /dev/xvdf

    I'm using the stock Ubuntu Amazon EC2 images from http://alestic.com/ (links to images at the top).
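
    One detail worth noting about the command above: the redirection (> /dev/xvdf) is performed by the calling shell as the 'ubuntu' user, not by the sudo'd netcat, which would explain the permission denied. A common workaround (a sketch, not tested on this image) is to run the redirection under root as well:

        sudo sh -c 'nc -l 9999 > /dev/xvdf'
        # or
        nc -l 9999 | sudo tee /dev/xvdf > /dev/null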

    Read the article

  • Can not boot CentOS VM using VirtIO in KVM

    - by Jake
    I converted a qcow2 image to raw and changed the I/O bus to VirtIO for a VM. Now I can't boot that VM. I installed the VirtIO driver with the following command:

        mkinitrd --with virtio_pci --with virtio_blk -f /boot/initrd-$(uname -r).img $(uname -r)

    and these are the related kernel modules:

        virtio_balloon 11329 0
        virtio_blk     11593 3
        virtio_pci     11845 0
        virtio_ring     8513 1 virtio_pci
        virtio          9541 3 virtio_balloon,virtio_blk,virtio_pci

    and this is what happens during boot-up. I also changed /boot/grub/device.map from "(hd0) /dev/sda" to "(hd0) /dev/vda" but the problem still exists. Any ideas how to fix this? This is my default boot option:

        title CentOS (2.6.18-308.13.1.el5)
            root (hd0,0)
            kernel /vmlinuz-2.6.18-308.13.1.el5 ro root=/dev/VolGroup00/LogVol00
            initrd /initrd-2.6.18-308.13.1.el5.img
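
    For comparison, a minimal sketch of what the disk stanza in the libvirt domain XML usually looks like once the image is raw and the bus is virtio (the file path and device name here are illustrative):

        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='/var/lib/libvirt/images/centos.img'/>
          <target dev='vda' bus='virtio'/>
        </disk>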

    Read the article

  • Warning message in boot.ini

    - by MA1
    Hi everyone, I have a dual boot system with Windows XP Pro and Windows 7. Following are the contents of my system's boot.ini:

        ;Warning: Boot.ini is used on Windows XP and earlier operating systems.
        ;Warning: Use BCDEDIT.exe to modify Windows Vista boot options.
        ;
        [boot loader]
        timeout=30
        default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
        [operating systems]
        multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /NOEXECUTE=OPTIN /FASTDETECT

    I just want to know about the first two warning lines: are these two lines always present on a dual boot system when the boot process is different for the installed operating systems, for example XP + Vista/W7 or Windows 2000 + Vista/W7 etc.? Regards,

    Read the article

  • Focus follows mouse stops working when opening window from launcher and no click to focus

    - by user97600
    This is the 12.04 default desktop (Unity). I set it to focus follows mouse, and changed the menus to be on the window. This worked for a while; then some unknown event (maybe an upgrade, maybe some other settings change) caused it to stop working. There are many ways for this behaviour to start, but one reliable one is to bring a window to the foreground/focus with the launcher. Now the focus is stuck on that window, and not just the window but the regions within the window, so the close, maximize, minimize buttons and the menus do not work. I have to use mouse middle-click and then mouse right-click, and then focus follows mouse is restored for a bit. The exact details of the mouse action aren't clear; sometimes it seems like just a middle-click helps, sometimes just a right-click, sometimes a desperate sequence of clicks :-(

    I have tried switching to the GNOME desktop and it seems to occur less there, but it is not eliminated. I have tried switching mice to an old wired USB mouse. I have tried creating a new account and that has not worked. I have observed "split focus", where the scroll button scrolls one window but the input goes to another. I got trapped recently where my keyboard input went to LibreOffice Calc while I was selecting the search term in the Chrome address window: the selection "grayed", but the keyboard input for the search went to LibreOffice. Regions in windows have very confused focus. I have to work hard to get focus on, for example, the close glyph (X) or the minimize glyph (_).
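
    One way to re-assert the focus-follows-mouse setting from a terminal, offered only as a sketch (it assumes the GNOME window-manager schema that Unity on 12.04 reads):

        gsettings set org.gnome.desktop.wm.preferences focus-mode 'sloppy'   # or 'mouse'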

    Read the article

  • Getting jerky backgrounds on Win7 on iMac 27'' 2560x1440

    - by JohnIdol
    I installed Win7 via Boot Camp on my new iMac 27'' (ATI video card) and everything was good until recently, when I noticed that the default Win7 background (the one in the background on the login screen) looked jerky. When I say jerky I mean the kind of jerky you get if you can't display enough colours: instead of nice fading shades you just get stripes and jerky patterns. I am on the native resolution, but even if I go down to 1920x1080 I get the same. This might have happened after a firmware update, but as I don't use Windows very often I am not too sure that's what caused it. Oh, and when I am playing games everything looks OK (as in, not jerky!). Any help appreciated!

    Read the article

  • IdentityServer Beta 1 Refresh & Windows Azure Support

    - by Your DisplayName here!
    I just uploaded two new releases to CodePlex.

    IdentityServer B1 refresh
    A number of bug fixes and streamlined extensibility interfaces, mostly a result of adding the Windows Azure support. Nothing has changed with regards to setup. Make sure you watch the intro video on the CodePlex site.

    IdentityServer B1 (Windows Azure Edition)
    I have now implemented all repositories for Windows Azure backed data storage. The default setup assumes you use the ASP.NET SQL membership provider database in SQL Azure and Table Storage for relying party, client certificates and delegation settings. The setup is simple:

    - Upload your SSL/signing certificate via the portal
    - Adjust the .cscfg file – you need to insert your storage account, certificate thumbprint and distinguished name. There is a setup tool that can automatically insert the certificate distinguished names into your config file.
    - Adjust the connection string for the membership database in WebSite\configuration\connectionString.config
    - Deploy

    Feedback
    Feature-wise this looks like the V1 release to me. It would be great if you could give me feedback when you find a bug etc. – especially: Do the built-in repository implementations work for you (both on-premise and Azure)? Are the repository interfaces enough to add your own data store or feature?

    Read the article

  • Preview Chitika Premium Ads On Your Website Quickly

    - by Gopinath
    Google AdSense is an excellent option for publishers like us to monetize traffic. As Google AdSense allows only 3 ad units per page, we have a good amount of space left empty on the blog. Why not use this empty space to earn some revenue (making sure that you are not annoying your visitors with too many ads)? On Tech Dreams today we started experimenting with Chitika Premium Ads to display advertisements to visitors landing on us through search engines. Chitika Premium Ads are displayed only to US visitors who find our pages through search engines. Visitors from outside the USA do not see these ads anywhere on our site. We being in India, how do we preview the Chitika ads on our site?

    To preview Chitika ads, add #chitikatest at the end of the URL. For example, to preview the ads on Tech Dreams I use the URL http://techdreams.org/#chitikatest

    The above URL displays the default list of ads Chitika displays. But if you want to see a preview of ads for a specific keyword, you can append it at the end of the URL. Here is another example: http://www.techdreams.org/#chitikatest=ipad

    Do You Know What The Word "Chitika" Means?
    When Chitika co-founders Venkat Kolluri and Alden DoRosario left Lycos in 2003 to start their own company, they sought a name that would suggest the speed with which its customers would be able to put up ads on their Web sites. Chitika, which means "snap of the fingers" in Telugu (a South Indian language), captured this sentiment and Chitika Inc. was born. (via)

    This article titled, Preview Chitika Premium Ads On Your Website Quickly, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • what could cause a script to fail to find python when it has `#!/usr/bin/env python` in the first line?

    - by jcollum
    Trying to get casperjs running on Ubuntu 12.04. After installing it, when I run it I get:

        09:20 $ ll /usr/local/bin/casperjs
        lrwxrwxrwx 1 root root 26 Nov 6 16:49 /usr/local/bin/casperjs -> /opt/casperjs/bin/casperjs
        09:20 $ /usr/bin/env python --version
        Python 2.7.3
        09:20 $ cat /opt/casperjs/bin/casperjs | head -4
        #!/usr/bin/env python
        import os
        import sys
        09:20 $ casperjs
        : No such file or directory
        09:22 $ python
        Python 2.7.3 (default, Sep 26 2013, 20:03:06)
        [GCC 4.6.3] on linux2

    So Python is present and runnable, casperjs is pointing to the right place and it is a Python script. But when I run it I get "No such file or directory". I can fix it by changing the first line of the casperjs Python file from:

        #!/usr/bin/env python

    to:

        #!/usr/bin/python

    Result:

        $ casperjs --version
        1.1.0-DEV

    I managed to fix it, but I'm wondering why it didn't work with #!/usr/bin/env python, since that seems to be a normal interpreter line. Do I have something configured wrong? Here are the steps to get casperjs:

        $ git clone git://github.com/n1k0/casperjs.git
        $ cd casperjs
        $ ln -sf `pwd`/bin/casperjs /usr/local/bin/casperjs
        $ casperjs
        : No such file or directory
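
    One frequent cause of exactly this symptom, offered here as a guess: a carriage return at the end of the shebang line makes the kernel look for an interpreter literally named "python\r", which is reported as "No such file or directory". It can be checked and fixed like so:

        head -1 /opt/casperjs/bin/casperjs | od -c      # a trailing \r before \n means DOS line endings
        sudo sed -i 's/\r$//' /opt/casperjs/bin/casperjs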

    Read the article

  • Task Manager: VM Size smaller than Mem usage?

    - by shoosh
    The Windows XP Task Manager can show two different columns regarding the memory usage of processes. One is called Mem Usage and the other is VM Size (not on by default, you need to activate it). From what I've gathered, VM Size is the size of the entire memory space occupied by the process and Mem Usage is the amount of memory currently committed and used. This assumption is verified by most processes, where the VM Size is only slightly larger than Mem Usage; for instance my Outlook currently has 79,724 K in VM Size and 56,600 K in Mem Usage. But it fails for other processes such as Firefox, which currently has 171,900 K for Mem Usage and only 156,440 K in VM Size. How can a process use more memory than the amount of virtual memory allocated to it? So maybe my interpretation of these columns is wrong. What do they actually mean?

    Read the article

  • It's called College.

    - by jeffreyabecker
    Today I saw yet another 'GUID vs int as your primary key' article. Like most of the ones I've read, this was filled with technical misrepresentations and outright fallacies. Chef's famous line that "There's a time and a place for everything, children" applies here. GUIDs have distinct advantages and disadvantages which should be considered when choosing a data type for the primary key.

    Fallacy 1: "It's easier"
    An integer data type (tinyint, smallint, int, bigint) is a better artificial key than a GUID because it's easier to remember. I'm a firm believer that your artificial primary keys should be opaque gibberish. PKs are an implementation detail which should never be exposed to the user or relied on for business logic. If you want things to come back in an order, add an ORDER BY clause and SortOrder fields. If you want a human-usable look-up, add a business key with a unique constraint. If you want to know what order things were inserted into a table, add a timestamp.

    Fallacy 2: "Size Matters"
    For many applications, the size of the artificial primary key is going to be irrelevant. The particular article which kicked this post off stated repeatedly that joining against an int has better performance than joining against a GUID. In computer science the performance of your algorithm is always a function of the number of data points. This still holds true for databases. Unless your table is very large, the performance difference between an int and a GUID probably isn't going to be measurable, let alone noticeable. My personal experience is that the performance becomes an issue when you start having billions of rows in the table. At this point, you should probably start looking to move from int to bigint, so the effective space/performance gain isn't as much as you'd think.

    GUID Advantages:
    Insert-ability / Mergeability: You can reliably insert GUIDs into tables without key collisions.
    Database Independence: Saving entities to the database often requires knowing ids. With identity-based ids the id must be selected back after every insert. GUIDs can be generated application-side, allowing much faster inserts.

    GUID Disadvantages:
    Generatability: You can calculate the next id for an integer PK pretty easily in your head, but you will need a program to generate GUIDs. Solution: "Select top 100 newid() from sysobjects".
    Fragmentation: Most GUID generation algorithms generate pseudo-random GUIDs. This can cause inserts into the middle of your clustered index. Solutions: add a default of newsequentialid() or use GuidComb in NHibernate.
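
    For illustration, a sketch of the sequential-GUID default mentioned above (the table and column names are invented):

        CREATE TABLE dbo.Orders (
            OrderId uniqueidentifier NOT NULL
                CONSTRAINT DF_Orders_OrderId DEFAULT NEWSEQUENTIALID(),
            OrderDate datetime NOT NULL,
            CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
        );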

    Read the article

  • Sharepoint 2007: Edit vs Read Only Mode

    - by user29116
    Sorry about the title, I don't really know what it should be. If I open a doc in read-only mode I'm able to press Save, and then it opens up a Save As box where the default directory is the directory on the SharePoint server; if you press Save you save it to the server. This actually makes the whole process not really "read only", since I could actually update the document. Is there a way to prevent this from happening, so that if someone chooses read only there is no way possible to upload any changes back to the SharePoint site? Also, it has been suggested as a solution to get rid of the edit/read-only option so that people have to check out the document. Is there a way to remove the edit/read-only option on documents?

    Read the article

  • Creating self-signed SSL on IIS - Remote access problem

    - by ile
    I followed these instructions to create a self-signed SSL certificate: http://www.visualwin.com/SelfSSL/ (I opened SelfSSL and typed selfssl /T). When I access https://localhost/ it works, but when I try to access it remotely (I set up my router to port forward to localhost), for example https://myip, the page does not load. Also, I noticed one other thing: when I access localhost locally I am asked to enter user/pass, but if I access it remotely I get the following warning:

        Under Construction
        The site you were trying to reach does not currently have a default page. It may be in the process of being upgraded and configured. ...

    I don't know if this is related, but I hope someone knows the answer. Thanks, Ile

    Read the article

  • Issue with PHP and osx 10.7 - runs via command line but not in browser

    - by jnolte
    I recently removed MAMP as I wanted to have more control over my machine and wanted to make use of PHP 5.4, which I installed using the script located here. I now cannot even get the default PHP that is built in to OS X to work. I am testing with a simple PHP script in a document in my ~/Sites directory. I am really at a loss as to why this will not work. I have php5 installed in my /usr/local directory via the link provided above, and it seems like the main PHP is installed in /usr/bin. Any and all insight on how to debug this would be greatly appreciated.
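
    For the built-in OS X PHP specifically, the usual first step (a sketch based on a stock 10.7 install) is to make sure the PHP module is enabled in Apple's Apache config and then restart it:

        # In /etc/apache2/httpd.conf, uncomment this line:
        LoadModule php5_module libexec/apache2/libphp5.so
        # then restart the built-in Apache:
        sudo apachectl restart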

    Read the article

  • Nginx won't start with "fastcgi_split_path_info" error

    - by Ke
    Hi, I heard that nginx is faster, and since I'm on a VPS with low RAM I thought I'd try it out. I got through this tutorial: http://www.howtoforge.com/installing-php-5.3-nginx-and-php-fpm-on-ubuntu-debian But I now get the following error:

        unknown directive "fastcgi_split_path_info" in /etc/nginx/sites-enabled/default:28

    Does anyone know what might be causing the problem? I can't find any reference to the problem on Google.

    Also, I have heard conflicting things about Nginx vs Apache. Some say use one, some say the other. I'm using all sorts of things such as rewrite rules, proxies etc. Am I setting myself up for a fall by using Nginx? If I go for Apache, does anyone know of any way to tweak it so that it performs better on a low-RAM VPS? Cheers, Ke
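
    One thing worth checking, offered as an assumption: fastcgi_split_path_info was only added in nginx 0.7.31, so an older packaged nginx will reject the directive as unknown. The installed version can be checked with:

        nginx -v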

    Read the article

  • OBIEE 11.1.1 - How to enable HTTP compression and caching in Oracle iPlanet Web Server

    - by Ahmed Awan
    1. To implement HTTP compression / caching, install and configure Oracle iPlanet Web Server 7.0.x for the bi_serverN Managed Servers (refer to document http://docs.oracle.com/cd/E23943_01/web.1111/e16435/iplanet.htm)

    2. On the Oracle iPlanet Web Server machine, open the Administrator's Configuration file (obj.conf) for editing. (Guidelines for modifying the obj.conf file are available at http://download.oracle.com/docs/cd/E19146-01/821-1827/821-1827.pdf)

    3. Add the following lines in the obj.conf file inside <Object name="default"> ... </Object> and restart the Oracle iPlanet Web Server machine:

        #HTTP Caching
        <If $path =~ '^(.*)\.(jpg|jpeg|gif|png|css|js)$'>
            ObjectType fn="set-variable" insert-srvhdrs="Expires:$(httpdate($time + 864000))"
        </If>

        <If $path =~ '^(.*)\.(jpg|jpeg|gif|png|css|js)$'>
            PathCheck fn="set-cache-control" control="public,max-age=864000"
        </If>

        #HTTP Compression
        Output fn="insert-filter" filter="http-compression" vary="false" compression-level="9" fragment_size="8096"

    Read the article

  • Basic web architecture: Perl -> PHP

    - by Sunny Jim
    This is an architecture question. If there is a better forum, please redirect me. Apologies in advance.

    Essentially every website is built around a relational database, right? When a user uploads form data, that data is stored in a table. The problem is that the table structure(s) need to be modified whenever the website form is modified, although I understand that modern web frameworks work around this problem by automatically building forms based on the table structure.

    For the last 20 years, I have been building websites using Perl. When I first encountered this problem, the easiest solution was to save serialized Perl objects as data BLOBs. After XML's introduction, this solution worked even better because XML is so effective for representing arbitrary data. This approach is consistent with the original Perl principles of Hubris, Laziness, and Impatience, and I'm pretty committed to it. Obviously, the biggest drawback is that this solution locks me into the Perl interpreter.

    So instead, I've just completed a prototype of a universal RDB table. The prototype is written in Perl, but porting it to PHP will be a good chance to develop those skills. The principle is based on the XML::Dumper module, which converts arbitrary Perl data structures into uniform XML. With my approach, each XML node is stored as a table record. I underestimated this undertaking and rolled something up myself. But the effort allows me to discuss the basic design instead of implementation details.

    As mentioned, I'm pretty committed to this approach of using flexible data structures. It's been successfully deployed on many websites, large and complex. But are there any drawbacks I've overlooked? I rolled my own. Are other people taking a similar approach to their data? What kinds of solutions are available? I have not abandoned my dream of eventually contributing something useful to the worldwide community. In order to proceed, the next step would be peer review. How does one pursue that effort? Thanks! -Jim
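
    To make the idea concrete, here is a purely hypothetical sketch of the kind of universal node table described above, where each XML node produced by XML::Dumper becomes one row (all names are invented for illustration):

        CREATE TABLE node (
            node_id   INTEGER      NOT NULL PRIMARY KEY,
            parent_id INTEGER      NULL,        -- NULL for the document root
            name      VARCHAR(64)  NOT NULL,    -- element name
            value     TEXT         NULL,        -- text content, if any
            seq       INTEGER      NOT NULL     -- order among siblings
        );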

    Read the article

  • How to publish an Access DB to an https SharePoint 2010 site with a self-signed certificate

    - by ybbest
    If you are having trouble (shown below) when you publish your Access database to an https SharePoint 2010 site with a self-signed certificate:

    Problem: First you get a warning, see the screenshot below. And then you get the error message.

    Solution: The error "The name of the security certificate is invalid or does not match the name of the site" comes when the 'common name' in the certificate doesn't match the address you provided in the browser to access the site. To fix the problem, you need to use a script to generate the certificate rather than using the IIS UI. This is because the UI will default the common name to the server name, and you will have the above problem when using that certificate on a web application with a different host name.

    You can use SelfSSL.exe (IIS 6.0 only); you have to specify the common name (cn), for example:

        selfssl.exe /T /N:cn=testsharepoint.com /K:1024 /V:7 /S:1 /P:443

    OR you can use makecert (IIS 7.0 and above):

        makecert -r -pe -n 'CN=my.domain.here' -b 01/01/2000 -e 01/01/2036 -eku 1.3.6.1.5.5.7.3.1 -ss my -sr localMachine -sky exchange -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12

    After you have created the certificate, you then need to add that self-signed certificate to your IIS web site and to the Trusted Root Certification Authorities. (To get there, press Windows + R, type mmc.exe, and add the Certificates console.) I have compiled this solution from the questions I have asked on sharepointstackexchange.

    Read the article
