Search Results

Search found 10622 results on 425 pages for 'shared hosting'.

Page 326/425 | < Previous Page | 322 323 324 325 326 327 328 329 330 331 332 333  | Next Page >

  • Can my machine run Ubuntu (Kubuntu | Xubuntu | Mint) 12.04 WELL?

    - by Steve
    I have a 9-year-old computer packing the hardware listed below. My question is: can I run 12.04 (Ubuntu, Kubuntu, Mint, or Xubuntu) WELL? I was running Ubuntu 10.10 and I upgraded to 12.04 by going through each release via the update manager: 11.04 - 11.10 - 12.04. During the installation process for 12.04 I saw an error message that there was an error installing and setting up part of the kernel. Later, when I tried installing a package in Synaptic, I got another error message mentioning the kernel. When I rebooted, I was told something about my video and graphics not being configured properly and that I would have to do it manually (like I know how). It gave me the option to enter the system in low-graphics mode, but it just hung. I had an old live CD of Xubuntu 10.10 around, so I used that to get into my computer and copy data over to an external hard drive. I then tried to install Xubuntu 10.10 from the live CD, with the option "download updates" checked. The install process moved along a bit, then halted for about 5 hours. I rebooted my machine and tried the Xubuntu 10.10 installer WITHOUT the option to "download updates". The install completed in about 15 minutes. So, all of that is making me wonder if there is something about 12.04 that does not like my hardware. I'm willing to try again, but only if I know I will not have to spend hours just to get to an error message and a hosed-up system like I did last night. I also think I have a lot more RAM than is being reported in the output below; I had extra RAM installed last year. I'm not good with the command-line readouts, but it seems like there should be a lot more. I wasn't thrilled with Unity, so I am willing to try Kubuntu 12.04. Will I run into the same problems? What is the highest version of a *ubuntu I can upgrade to? Thanks

    CPU
      Model: Intel(R) Pentium(R) 4 CPU 2.53GHz
      Frequency: 2533.223 MHz
      L2 Cache: 512 KB
      Bogomips: 5066.44
      Numbering: family(15) model(2) stepping(7)
      Flags: fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe up pebs bts cid

    RAM
      ~$ free -mt
                   total       used       free     shared    buffers     cached
      Mem:          1506        891        615          0         91        521
      -/+ buffers/cache:         278       1227
      Swap:         1609          0       1609
      Total:        3116        891       2225

    Video Card
      01:00.0 VGA compatible controller: nVidia Corporation NV18 [GeForce4 MX 440 AGP 8x] (rev a2) (prog-if 00 [VGA controller])
      Subsystem: ASUSTeK Computer Inc. V9180 Magic
      Flags: bus master, 66MHz, medium devsel, latency 32, IRQ 16
      Memory at fd000000 (32-bit, non-prefetchable) [size=16M]
      Memory at f0000000 (32-bit, prefetchable) [size=64M]
      Expansion ROM at fe9e0000 [disabled] [size=128K]
      Kernel driver in use: nouveau
      Kernel modules: nouveau, nvidiafb

    Motherboard
      Intel 845PE ATX 533FSB DDR333 USB2
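
    A note on the RAM question: free only shows what the kernel sees, so a quick cross-check against the physically installed DIMMs (a suggestion, not from the original post) is:

      # list the DIMMs the BIOS reports; sizes here are what is actually installed
      sudo dmidecode -t memory | grep -i size

    If dmidecode reports more than free does, the missing memory is usually a BIOS or kernel issue rather than anything specific to 12.04.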

    Read the article

  • The term "interface" in C++

    - by Flexo
    Java makes a clear distinction between class and interface. (I believe C# does also, but I have no experience with it.) When writing C++, however, there is no language-enforced distinction between class and interface. Consequently I've always viewed interface as a workaround for the lack of multiple inheritance in Java. Making such a distinction feels arbitrary and meaningless in C++. I've always tended to go with the "write things in the most obvious way" approach, so if in C++ I've got what might be called an interface in Java, e.g.:

      class Foo {
      public:
          virtual void doStuff() = 0;
          virtual ~Foo() = 0;  // note: a pure virtual destructor still needs an out-of-line definition
      };

    and I then decided that most implementers of Foo wanted to share some common functionality, I would probably write:

      class Foo {
      public:
          virtual void doStuff() = 0;
          virtual ~Foo() {}
      protected:
          // If it needs this to do its thing:
          int internalHelperThing(int);
          // Or if it doesn't need the this pointer:
          static int someOtherHelper(int);
      };

    which then makes this not an interface in the Java sense anymore. Instead C++ has two important concepts, related to the same underlying inheritance problem:

      1. virtual inheritance
      2. Classes with no member variables can occupy no extra space when used as a base ("Base class subobjects may have zero size" - Reference)

    Of those I try to avoid #1 wherever possible - it's rare to encounter a scenario where that genuinely is the "cleanest" design. #2 is, however, a subtle but important difference between my understanding of the term "interface" and the C++ language features. As a result of this I currently (almost) never refer to things as "interfaces" in C++ and talk in terms of base classes and their sizes. I would say that in the context of C++ "interface" is a misnomer. It has come to my attention though that not many people make such a distinction. Do I stand to lose anything by allowing (e.g. protected) non-virtual functions to exist within an "interface" in C++? (My feeling is exactly the opposite - a more natural location for shared code.) Is the term "interface" meaningful in C++ - does it imply only pure virtual, or would it be fair to call C++ classes with no member variables an interface still?
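
    A minimal sketch of point #2 (my own illustration, not from the original question; it needs C++11 for static_assert) - a data-free, non-polymorphic base adds nothing to the derived object's size:

      #include <cstdio>

      struct Empty {};                  // no member variables
      struct Holder : Empty { int x; }; // Empty used as a base subobject

      // The empty-base optimization is required for standard-layout types like
      // Holder, and holds on mainstream ABIs generally.
      static_assert(sizeof(Holder) == sizeof(int), "empty base occupies no space");

      int main() {
          std::printf("sizeof(Empty)=%zu sizeof(Holder)=%zu\n", sizeof(Empty), sizeof(Holder));
      }

    Note that sizeof(Empty) on its own is still 1, since complete objects must have distinct addresses; the zero-size property applies only to base subobjects.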

    Read the article

  • Cannot reactivate RAID-5 volume: The size of the plex member is invalid

    - by Ian Boyd
    We had a 3-drive Windows Server 2008 R2 RAID-5 fail (operating in redundancy mode): WDC 1 TB, WDC 1 TB, WDC 1 TB. We removed the failed hard drive and put a WDC 1 TB drive (that we had standing by) into the machine. When launched, Disk Manager asked permission to "initialize" the disk as either Master Boot Record (MBR) or GUID Partition Table (GPT). We initialized the disk as GPT, converted it to dynamic, and tried to use the Repair Volume command - except it was greyed out (which is a terrifying thing on a failed production server hosting 3 virtual servers). I tried from the diskpart command-line tool. First we look for our RAID-5 volume that is in Failed Rd mode:

      DISKPART> list volume

        Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
        ----------  ---  -----------  -----  ----------  -------  ---------  --------
        Volume 0     E   VMs (Raid5)  NTFS   RAID-5      1863 GB  Failed Rd
        Volume 1     D                       DVD-ROM        0 B   No Media
        Volume 2         System Rese  NTFS   Partition    100 MB  Healthy    System
        Volume 3     C                NTFS   Partition   1862 GB  Healthy    Boot

    There, Volume 0. Make that our active context:

      DISKPART> select volume 0
      Volume 0 is the selected volume.

    Now we need to find the disk we will be repairing the volume with:

      DISKPART> list disk

        Disk ###  Status         Size     Free     Dyn  Gpt
        --------  -------------  -------  -------  ---  ---
        Disk 0    Online          931 GB      0 B   *
        Disk 1    Online          931 GB   931 GB   *
        Disk 2    Online         1863 GB      0 B
        Disk 3    Online          931 GB      0 B   *
        Disk M0   Missing           0 B      0 B   *

    The disk with 931 GB free, Disk 1. Now we just need to repair the volume:

      DISKPART> repair disk=1

      Virtual Disk Service error:
      The size of the plex member is invalid.
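
    One hedged suggestion (mine, not from the post): this error often means the replacement disk's usable dynamic-disk area is slightly smaller than the failed member's, and a GPT-initialized disk loses a little more to partition-table overhead than an MBR one. If the surviving members are MBR dynamic disks, wiping the new disk and redoing it as MBR is worth a try - note that clean destroys everything on the selected disk:

      DISKPART> select disk 1
      DISKPART> clean
      DISKPART> convert mbr
      DISKPART> convert dynamic
      DISKPART> select volume 0
      DISKPART> repair disk=1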

    Read the article

  • Mounting an Azure blob container in a Linux VM Role

    - by djechelon
    I previously asked a question about this topic, but now I prefer to rewrite it from scratch because I was very confused back then. I currently have a Linux XS VM Role in Azure. I basically want to create a self-managed, evolved hosting service using VMs rather than Azure's more expensive Web Roles. I also want to take advantage of load balancing (between VM Roles) and geo-replication (of Storage Roles), making sure that the "web files" of customers are located in a defined and manageable place. One way I found to "mount" a drive in a Linux VM is described here and involves mounting a VHD onto the virtual machine. From what I could learn, the VHD is reliably stored in a storage role and is exclusively locked by the VM that uses it. Once the VM Role has its drive I can format the partition to any size I want. I don't want that!! I would like each hosted site to have its own blob directory, and then each replicated/load-balanced VM Role to mount that blob directory read-write, NFS-style, to read HTML and script files. The database is obviously courtesy of Microsoft :) My questions are: Is it possible to actually mount a blob storage container into a directory in the Linux FS? Is it possible in Windows Server 2008?
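
    For the record, a FUSE adapter for exactly this appeared later: a hedged sketch using Microsoft's blobfuse (it postdates the question, its flags and config keys may differ by version, and all names below are placeholders):

      # connection details live in a small config file
      cat > /etc/blobfuse.cfg <<EOF
      accountName myaccount
      accountKey  <storage-account-key>
      containerName web
      EOF

      mkdir -p /mnt/web /mnt/blobfusetmp
      blobfuse /mnt/web --tmp-path=/mnt/blobfusetmp --config-file=/etc/blobfuse.cfg

    Keep in mind it is blob semantics underneath, not a real POSIX file system, so concurrent writers across load-balanced VMs need care.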

    Read the article

  • CA SiteMinder Configuration for Ubuntu

    - by Matt Franklin
    I receive the following error when attempting to start apache through the init.d script:

      apache2: Syntax error on line 186 of /etc/apache2/apache2.conf: Syntax error on line 4 of /etc/apache2/mods-enabled/auth_sm.conf: Cannot load /apps/netegrity/webagent/bin/libmod_sm22.so into server: libsmerrlog.so: cannot open shared object file: No such file or directory

    SiteMinder does not officially support Ubuntu, so I am having trouble finding any configuration documentation to help me troubleshoot this issue. I successfully installed the SiteMinder binaries and registered the trusted host with the server, but I am having trouble getting the apache mod to load correctly. I have added the following lines to a new auth_sm.conf file in /etc/apache2/mods-available and symlinked to it in /etc/apache2/mods-enabled:

      SetEnv LD_LIBRARY_PATH /apps/netegrity/webagent/bin
      SetEnv PATH ${PATH}:${LD_LIBRARY_PATH}
      LoadModule sm_module /apps/netegrity/webagent/bin/libmod_sm22.so
      SmInitFile "/etc/apache2/WebAgent.conf"
      Alias /siteminderagent/pwcgi/ "/apps/netegrity/webagent/pw/"
      <Directory "/apps/netegrity/webagent/pw/">
          Options Indexes MultiViews ExecCGI
          AllowOverride None
          Order allow,deny
          Allow from all
      </Directory>

    UPDATE: Output of ldd libmod_sm22.so:

      $ ldd /apps/netegrity/webagent/bin/libmod_sm22.so
          linux-gate.so.1 => (0xb8075000)
          libsmerrlog.so => /apps/netegrity/webagent/bin/libsmerrlog.so (0xb7ec0000)
          libsmeventlog.so => /apps/netegrity/webagent/bin/libsmeventlog.so (0xb7ebb000)
          libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb7e9a000)
          libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0xb7e96000)
          librt.so.1 => /lib/tls/i686/cmov/librt.so.1 (0xb7e8d000)
          libstdc++.so.5 => /usr/lib/libstdc++.so.5 (0xb7dd3000)
          libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb7dad000)
          libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7d9e000)
          libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7c3a000)
          libsmcommonutil.so => /apps/netegrity/webagent/bin/libsmcommonutil.so (0xb7c37000)
          /lib/ld-linux.so.2 (0xb8076000)

    UPDATE: The easiest way to set environment variables for the Apache run user in Ubuntu is to edit the /etc/apache2/envvars file and add export statements for any library paths you may need.
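
    A minimal sketch of that envvars fix (the library path is the one from the post; SetEnv cannot help here because it only affects the per-request environment, not the dynamic loader at the moment Apache loads the module):

      # /etc/apache2/envvars
      export LD_LIBRARY_PATH=/apps/netegrity/webagent/bin${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}

    Then restart with /etc/init.d/apache2 restart so the new environment is picked up.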

    Read the article

  • Using unixODBC to connect to Oracle server

    - by Paul
    I am trying to configure our web server (RHEL 5.4 x86) to connect to an Oracle database using unixODBC. I have installed unixODBC-2.2.11-7.1.1, which yum tells me is the latest version. I have also installed the Oracle InstantClient 11.2 and the Oracle InstantClient ODBC library. I have symlinked all the .so files in /usr/lib/oracle/11.2/client/lib to /usr/lib. I have set $LD_LIBRARY_PATH to /usr/lib/, $ORACLE_HOME to /usr/lib/oracle, and $TNS_ADMIN to the directory containing my (valid) tnsnames.ora file. Here are the contents of my /etc/odbcinst.ini file:

      [Oracle]
      Description = Oracle ODBC Connection
      Driver      = /usr/lib/libsqora.so.11.1
      Setup       =
      FileUsage   =

    and my /etc/odbc.ini file:

      [Oracle]
      Application Attributes = T
      Attributes = W
      BatchAutocommitMode = IfAllSuccessful
      CloseCursor = F
      DisableDPM = F
      DisableMTS = T
      Driver = Oracle
      EXECSchemaOpt =
      EXECSyntax = T
      Failover = T
      FailoverDelay = 10
      FailoverRetryCount = 10
      FetchBufferSize = 64000
      ForceWCHAR = F
      Lobs = T
      Longs = T
      MetadataIdDefault = F
      QueryTimeout = T
      ResultSets = T
      ServerName = //<host>:<port>/<db>
      SQLGetData extensions = F
      Translation DLL =
      Translation Option = 0
      UserID =

    (ServerName has been edited... host, port, and db are actually there, and correct.) When I run isql I get:

      $ isql -v Oracle
      isql: symbol lookup error: /usr/lib/libsqora.so.11.1: undefined symbol: SQLGetPrivateProfileStringW

    And running dltest gives me:

      $ dltest Oracle SQLConnect
      [dltest] ERROR dlopen: Oracle: cannot open shared object file: No such file or directory

    If anyone has any insights I would be grateful; I've been trying to get this to connect for about 5 hours now... I am going home for the night, but will gladly provide more details, if necessary, tomorrow morning, to anyone willing to help...
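
    One thing worth checking (my suggestion, not from the post): SQLGetPrivateProfileStringW is provided by unixODBC's libodbcinst, and Oracle's driver expects to resolve it at load time. Confirm the installed unixODBC actually exports it, then try preloading the library (the path is an assumption - locate yours first):

      # does the installed unixODBC export the Unicode entry point?
      nm -D /usr/lib/libodbcinst.so | grep SQLGetPrivateProfileStringW

      # if it does, force it into the process:
      LD_PRELOAD=/usr/lib/libodbcinst.so isql -v Oracle

    If nm finds nothing, unixODBC 2.2.11 likely predates the Unicode entry points, and upgrading unixODBC would be the next step.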

    Read the article

  • SQL Server 2008 Cluster Installation - First network name always fails

    - by boflynn
    I'm testing failover clustering in Windows Server 2008 to host a SQL Server 2008 installation, using this installation guide. My base cluster is installed and working properly, as is the clustered DTC service. However, when it comes time to install SQL Server, my first attempt at installation always fails with the same message and seems to "taint" the network name. For example, with my previous cluster attempt, I was installing SQL Server as VSQL. After approximately 15 attempts at installation and trying to resolve the errors (e.g. changing domain accounts for SQL, setting SPNs, etc.), I typoed the network name as VQSL and the installation worked. Similarly on my current cluster, I tried installing with the SQL service named PROD-C1-DB and got the same errors as last time until I tried changing the name to anything else, e.g. PROD-C1-DB1, SQL, TEST, etc., at which point the install works. It will even install to VSQL now. While testing, my install routine was:

      1. Run setup.exe from patched media, selecting appropriate options
      2. After the install fails, choose "Remove node from a SQL Server failover cluster" and remove the single, failed node
      3. Attempt to diagnose the problem, inspect event logs, etc.
      4. Delete the computer account that was created for the SQL service from Active Directory
      5. Delete the MSSQL10.MSSQLSERVER folder from the shared data drive

    The error message I receive from the SQL Server installer is:

      The following error has occurred: The cluster resource 'SQL Server' could not be brought online. Error: The group or resource is not in the correct state to perform the requested operation. (Exception from HRESULT: 0x8007139F)

    Along with hundreds of the following errors in the Application event log:

      [sqsrvres] checkODBCConnectError: sqlstate = 28000; native error = 4818; message = [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    System configuration notes: Windows Server 2008 Enterprise Edition x64; SQL Server 2008 Enterprise Edition x64 using slipstreamed SP1+CU1 media; Dell PowerEdge servers; Fibre attached storage.
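
    The ANONYMOUS LOGON flood looks like Kerberos falling back to NTLM: if the SQL network name has no matching SPN (or a stale one left over from a deleted computer account), the cluster's own health-check logins arrive anonymous. A hedged sketch of inspecting and registering the SPN (domain and service-account names are placeholders):

      REM list what is registered against the SQL network name
      setspn -L PROD-C1-DB

      REM register the instance SPN against the SQL Server service account
      setspn -A MSSQLSvc/PROD-C1-DB.example.local:1433 EXAMPLE\sqlsvc

    Checking for leftover SPNs after step 4 of the routine above (deleting the computer account) would also fit the "taint" symptom, since the old name only works again once the stale entries are gone.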

    Read the article

  • Setting up Tomcat6 properly in Ubuntu 10.04

    - by aasukisuki
    We have a Tomcat6 instance running on Ubuntu 10.04 LTS. Our test box was just a Windows machine running Tomcat6. Both machines (Linux and Windows) have 1 GB of RAM. Via the Tomcat configuration tool in Windows, I was able to set the min/max/permgen sizes of the JVM. Those were set to 256/512/128 respectively. Now on the Ubuntu box, I've tried setting the JVM options in several different places, including:

      - Adding JAVA_OPTS & CATALINA_OPTS in /etc/environment
      - Adding JAVA_OPTS in $CATALINA_HOME/bin/catalina.sh
      - Creating setenv.sh and adding JAVA_OPTS in $CATALINA_HOME/bin
      - Adding JAVA_OPTS directly to /etc/init.d/tomcat6
      - Un-commenting JAVA_OPTS and modifying it in /etc/default/tomcat6

    Nearly all of those methods did not work, except for modifying /etc/init.d/tomcat6 directly (and possibly the /etc/default/tomcat6 change, but I just did that). However, my understanding is that when you change these settings, only one JVM should be used for the entire Tomcat6 instance, and that memory is shared among the applications. On our Windows box, Tomcat6 runs as a service and appears to behave this way. However, when I look at htop on the Linux box, there are 20+ tomcat6 entries (I have an app that triggers internal jobs every X seconds using cron, so maybe these are threads? Or are they actual instances?), all with those memory settings. The app runs fine for a bit but eventually ends up locking up. I'm guessing each of these apps thinks it has 512m to work with and never GCs, then locks Tomcat up completely. What is the proper way to set all of this up?
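
    Two notes that may defuse this (hedged, not from the post): htop shows threads by default (press H to toggle), so 20+ identical tomcat6 rows with the same memory figure are normally one JVM, not twenty. And on Ubuntu's packaged Tomcat6, /etc/default/tomcat6 is the supported place for the flags - a sketch matching the Windows settings above:

      # /etc/default/tomcat6
      JAVA_OPTS="-Djava.awt.headless=true -Xms256m -Xmx512m -XX:MaxPermSize=128m"

    followed by a sudo service tomcat6 restart. If the lockups persist with one JVM confirmed, a thread dump (kill -3 <pid>) taken at hang time would show whether it is GC pressure or a deadlock.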

    Read the article

  • Trying to run an ASP.NET MVC application using Mono on Apache with FastCGI.

    - by Arda Xi
    I have a hosting account with DreamHost, and I would like to use the same account to run ASP.NET applications. I have an application deployed in a subdomain, with a .htaccess with a handler like this:

      # Define the FastCGI Mono launcher as an Apache handler and let
      # it manage this web-application (its files and subdirectories)
      SetHandler monoWrapper
      Action monoWrapper /home/arienh4/<domain>/cgi-bin/mono.fcgi virtual

    My mono.fcgi is set up as such:

      #!/bin/sh
      #umask 0077
      exec >>/home/arienh4/tmp/mono-fcgi.log
      exec 2>>/home/arienh4/tmp/mono-fcgi.err
      echo $(date +"[%F %T]") Starting fastcgi-mono-server2
      cd /
      chmod 0700 /home/arienh4/tmp/mono-fcgi.sock
      echo $$>/home/arienh4/tmp/mono-fcgi.pid
      # stdin is the socket handle
      export PATH="/home/arienh4/mono/bin:$PATH"
      export LD_LIBRARY_PATH="/home/arienh4/mono/lib:$LD_LIBRARY_PATH"
      export TMP="/home/arienh4/tmp"
      export MONO_SHARED_DIR="/home/arienh4/tmp"
      exec /home/arienh4/mono/bin/mono /home/arienh4/mono/lib/mono/2.0/fastcgi-mono-server2.exe \
          /logfile=/home/arienh4/logs/fastcgi-mono-web.log /loglevels=All \
          /applications=/:/home/arienh4/<domain>

    I took this from the Mono site for CGI; I'm not sure if I'm doing it correctly, though. This code is resulting in this error:

      Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.

    I have no idea what's causing this. As far as I can see, Mono isn't even hit (no log files are created).
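
    A hedged first check (my suggestion): since no log files appear at all, confirm Apache can actually execute the launcher, and watch its output directly - if nothing lands in these logs even when run by hand, the problem is the script or the Mono install rather than the redirect loop:

      chmod +x /home/arienh4/<domain>/cgi-bin/mono.fcgi
      /home/arienh4/<domain>/cgi-bin/mono.fcgi &
      tail -f /home/arienh4/tmp/mono-fcgi.log /home/arienh4/tmp/mono-fcgi.err

    (Run by hand it may exit quickly for lack of a FastCGI socket on stdin, but anything it prints on the way down is informative.)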

    Read the article

  • therubyracer on Ubuntu install issues with ruby-2.0.0-p247

    - by Victor S
    I can't seem to get the therubyracer gem installed on Ubuntu; I get the following errors. Can anyone help?

      ERROR:  Error installing therubyracer:
      ERROR:  Failed to build gem native extension.

      /home/victorstan/.rvm/rubies/ruby-2.0.0-p247/bin/ruby extconf.rb
      checking for main() in -lpthread... yes
      creating Makefile
      make
      compiling constraints.cc
      compiling array.cc
      compiling value.cc
      compiling invocation.cc
      compiling primitive.cc
      compiling trycatch.cc
      compiling context.cc
      compiling exception.cc
      compiling template.cc
      compiling accessor.cc
      compiling object.cc
      compiling script.cc
      compiling external.cc
      compiling stack.cc
      compiling gc.cc
      compiling backref.cc
      compiling heap.cc
      compiling v8.cc
      compiling constants.cc
      compiling date.cc
      compiling function.cc
      compiling rr.cc
      compiling message.cc
      compiling init.cc
      compiling string.cc
      compiling handles.cc
      compiling signature.cc
      compiling locker.cc
      linking shared-object v8/init.so
      g++: /home/victorstan/.rvm/gems/ruby-2.0.0-p247@global/gems/libv8-3.11.8.17-x86_64-linux/vendor/v8/out/ia32.release/obj.target/tools/gyp/libv8_base.a: No such file or directory
      g++: /home/victorstan/.rvm/gems/ruby-2.0.0-p247@global/gems/libv8-3.11.8.17-x86_64-linux/vendor/v8/out/ia32.release/obj.target/tools/gyp/libv8_snapshot.a: No such file or directory
      make: *** [init.so] Error 1

    Update: It seems it's trying to use a 32-bit library instead of a 64-bit one. Has anyone else experienced this when installing 64-bit applications and packages on Ubuntu, with it trying to use the 32-bit version instead (even though the system is 64-bit Linux)? Related: https://github.com/cowboyd/therubyracer/issues/262
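
    A hedged reading of the log: the gem is the x86_64-linux build, yet the link step looks under vendor/v8/out/ia32.release, so the bundled V8 static libraries were built (or looked up) for the wrong architecture. The workaround commonly reported for this pairing is to build libv8 against a system copy of V8 instead (the version pin is an assumption - match yours):

      sudo apt-get install libv8-dev
      gem uninstall libv8
      gem install libv8 -v 3.11.8.17 -- --with-system-v8
      gem install therubyracer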

    Read the article

  • Apache: VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported

    - by user45761
    Hi, when I add the line below to /etc/apache2/apache2.conf, I get the error below it when I restart Apache:

      Include /usr/share/doc/apache2.2-common/examples/apache2/extra/httpd-vhosts.conf

      [Mon Jun 14 12:16:47 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
      [Mon Jun 14 12:16:47 2010] [warn] NameVirtualHost *:80 has no VirtualHosts

    This is my httpd-vhosts.conf file:

      #
      # Use name-based virtual hosting.
      #
      NameVirtualHost *:80
      #
      # VirtualHost example:
      # Almost any Apache directive may go into a VirtualHost container.
      # The first VirtualHost section is used for all requests that do not
      # match a ServerName or ServerAlias in any <VirtualHost> block.
      #
      <VirtualHost *:80>
          ServerName tirengarfio.com
          DocumentRoot /var/www/rs3
          <Directory /var/www/rs3>
              AllowOverride All
              Options MultiViews Indexes SymLinksIfOwnerMatch
              Allow from All
          </Directory>
          Alias /sf /var/www/rs3/lib/vendor/symfony/data/web/sf
          <Directory "/var/www/rs3/lib/vendor/symfony/data/web/sf">
              AllowOverride All
              Allow from All
          </Directory>
      </VirtualHost>

    Any idea? Regards, Javi
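
    On Ubuntu, /etc/apache2/ports.conf normally declares NameVirtualHost *:80 already, so this include adds a second declaration, and the "mixing * ports and non-* ports" error typically means some other enabled vhost binds a specific address or port. A hedged cleanup, assuming the stock layout: keep only the declaration in ports.conf,

      # /etc/apache2/ports.conf (stock on Ubuntu)
      NameVirtualHost *:80
      Listen 80

    delete the duplicate NameVirtualHost line from httpd-vhosts.conf, and check that every file under /etc/apache2/sites-enabled/ opens with <VirtualHost *:80> rather than a bare <VirtualHost ip:port>.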

    Read the article

  • Connectivity with SQL Server Express 2008 r2 and SQL Server 2000 on same machine

    - by Jim R
    At first glance this may seem a duplicate of Installing both SQL Server 2000 and SQL Server 2008 on the same machine, but it is not. I have SQL Server 2000 and SQL Server 2008 R2 installed on the same machine and working fine. My problem lies with connecting to the 2008 R2 server from a remote machine. My connectivity needs to be TCP. The legacy installation of SQL 2000 uses the default port of 1433. The named instance is by default configured to use 'Shared Memory' and is working fine. When I configured the 2008 R2 server to use 1433 (I did not think that through) the service refused to start, because 1433 was already in use by the legacy SQL 2000 default instance. Doh! What I want to do is have both servers available simultaneously via TCP. Both servers need not be on the same port, but if I cannot run them on the same port, then how do I configure the clients? Is there not some kind of proxy available that can monitor the 1433 port and pass the request through to the correct SQL instance by name? Is this capability built into SQL Server already? Thanks, Jim
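
    The "proxy" asked about essentially exists: the SQL Server Browser service listens on UDP 1434 and redirects clients to whatever TCP port each named instance is using, so the two instances can happily live on different ports. A hedged sketch (instance name and port are placeholders): pin the 2008 R2 instance to a fixed port in Configuration Manager, allow UDP 1434 through the firewall, and clients can connect either way:

      REM by instance name, resolved via the SQL Browser service (UDP 1434):
      sqlcmd -S tcp:myserver\SQL2008R2 -E

      REM or pinned to the port directly, bypassing the Browser:
      sqlcmd -S tcp:myserver,2433 -E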

    Read the article

  • Changes made to cli's php.ini not taking effect

    - by Sandeepan Nath
    I have two php.ini files: /etc/php.ini, which loads in the CLI case, and /opt/lampp/etc/php.ini, which loads in the browser case. I am able to use PHP's Mailparse extension after adding the line extension=mailparse.so to /opt/lampp/etc/php.ini and restarting lampp. But I am not able to load the same in the command-line case - I get:

      PHP Fatal error: Call to undefined function mailparse_msg_create() in ...

    mailparse_msg_create() is part of the Mailparse extension. I tried relogging with the user after making the change and even restarting the system. What needs to be done so that the change takes effect?

    Update: I checked that php -i | grep 'Configuration File' gives:

      PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/mailparse.so' - /usr/lib/php/modules/mailparse.so: cannot open shared object file: No such file or directory in Unknown on line 0
      Configuration File (php.ini) Path => /etc
      Loaded Configuration File => /etc/php.ini

    Update 2: I copied mailparse.so from /opt/lampp/lib/php/extensions/no-debug-non-zts-20090626 and put it in /usr/lib/php/modules. I added extension=mailparse.so to /etc/php.ini as well. But it still showed this warning:

      PHP Warning: PHP Startup: Unable to load dynamic library ...

    As told by Lekensteyn, I ran ldd /usr/lib/php/modules/mailparse.so and got:

      ldd: warning: you do not have execution permission for /usr/lib/php/modules/mailparse.so

    So I gave it execute permission. Then ldd /usr/lib/php/modules/mailparse.so showed:

      linux-gate.so.1 => (0x00110000)
      libc.so.6 => /lib/libc.so.6 (0x0011d000)
      /lib/ld-linux.so.2 (0x003aa000)

    which looks normal. But now, running the php command says:

      PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/mailparse.so' - /usr/lib/php/modules/mailparse.so: undefined symbol: mbfl_name2no_encoding in Unknown on line 0
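
    The final error is the telling one: mbfl_name2no_encoding is a symbol from the mbstring extension, which mailparse depends on, so mbstring must be loaded first (and the copied .so must match the CLI PHP's build - mixing XAMPP binaries with the distro PHP can still bite). A sketch, assuming mbstring.so is present in the same modules directory:

      ; /etc/php.ini - order matters: mailparse needs symbols from mbstring
      extension=mbstring.so
      extension=mailparse.so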

    Read the article

  • Windows Server 2008 / SQL 2008 Licensing for Authenticated Web Application

    - by MikeM
    Hello, I'm trying to crunch some numbers to see what the software costs involved are for hosting an application we are developing. Users will not be anonymous - they will need to log in. SQL Server 2008: SQL Server licensing is easy - it will be licensed per-processor. No real fuss there. The cost of CALs would be much higher for the number of users as compared to the processor licenses. Windows Server 2008: This is where it gets trickier. We need to license the OS for both the web servers (there will be a couple) plus the database servers (also a couple). The web servers could run on the Web Edition without a need for CALs, but if you continue reading, you will see that may not matter much because I will likely have user CALs for each user anyway. We can't use the "External Connector" for any of the Windows licenses, because that doesn't cover customers who are paying to access a hosted application. We can't use the Web Edition for the SQL Servers, because that license only allows a database running on Web Edition to host data for the local web application (i.e. other web servers can't connect to it). So that leaves us with the "full" editions of Windows Server for the database server OS. I find this a little ridiculous, and I feel as though I must be missing something, but it looks to me like I will actually need to buy a CAL for every user who signs up to use our service. I feel like I'm missing something because that means that for every user, I have to shell out $40 for a CAL. That could be one or two years' worth of revenue from each user for an inexpensive service! Is there any way to serve a web application to authenticated users without paying for individual Windows Server CALs, if the web servers and SQL servers are separate boxes?

    Read the article

  • How to manage sub-domains on WinHost with IIS7 URL Rewrite 2.0?

    - by jrummell
    I'm trying out WinHost and I'm running into some issues with subdomains. On WinHost, you can have multiple subdomains per hosting account, but each subdomain points to the root website. E.g. you can have www.example.com, sub1.example.com, and sub2.example.com, but all of them display the content at http://www.example.com/. Other hosts allow you to point subdomains at a subfolder in your website. This would allow you to point sub1.example.com to /sub1, sub2.example.com to /sub2, and www.example.com to /. WinHost recommends using an asp/aspx page to redirect http://sub1.example.com to http://sub1.example.com/sub1, which points to /sub1. While that would work, I'd like to not have the subdomain in the URL twice. So I tried using IIS7 URL Rewrite to point http://sub1.example.com to /sub1. Ben Powell describes this in detail on his blog. This is great, except Request.ApplicationPath is now /sub1/path/to/current/page.aspx, which breaks ASP.NET Themes (and probably other stuff too). What can I do to fix the ApplicationPath? Is there a better way to accomplish this?
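
    For reference, a sketch of the rewrite-rule pattern under discussion, as it would sit in web.config (host name and folder are placeholders; this is the generic IIS URL Rewrite shape, not WinHost-specific advice):

      <rewrite>
        <rules>
          <rule name="sub1 host to folder" stopProcessing="true">
            <match url=".*" />
            <conditions>
              <add input="{HTTP_HOST}" pattern="^sub1\.example\.com$" />
              <add input="{PATH_INFO}" pattern="^/sub1($|/)" negate="true" />
            </conditions>
            <action type="Rewrite" url="/sub1/{R:0}" />
          </rule>
        </rules>
      </rewrite>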

    Read the article

  • Executing Oracle SQLPlus in a Powershell Invoke-Command statement against a remote machine

    - by Scott Muc
    We have a basic PowerShell script that attempts to execute sqlplus.exe on a remote machine. The remote machine does not have the Oracle Instant Client installed, but we have bundled all the necessary DLLs in a remote folder. For example, we have sqlplus.exe and its dependencies in the directory C:\temp\oracle. If I navigate to that path on the remote server and execute sqlplus.exe, it runs just fine: I get the prompt for a username. If I go:

      Invoke-Command -comp remote.machine.host -ScriptBlock { C:\temp\oracle\sqlplus.exe }

    I get the following:

      Error 57 initializing SQL*Plus
      + CategoryInfo : NotSpecified: (Error 57 initializing SQL*Plus:String) [], RemoteException
      + FullyQualifiedErrorId : NativeCommandError
      Error loading message shared library

    Thinking that it's potentially a PATH issue, I tried the following:

      Invoke-Command -comp remote.machine.host -ScriptBlock {
          $env:ORACLE_HOME = "C:\temp\oracle"
          $env:PATH = "$env:PATH;$env:ORACLE_HOME"
          C:\temp\oracle\sqlplus.exe
      }

    This had the same result. The error code is not very helpful and is extremely frustrating, since it does work when I log on to the machine. What is PowerShell remoting doing that's making this not work?
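
    Error 57 together with "Error loading message shared library" generally means SQL*Plus cannot locate its message files under ORACLE_HOME\sqlplus\mesg. A hedged check (the .msb file name varies by version, and this is my assumption about the bundle, not something from the post) - an interactive logon may be inheriting an ORACLE_HOME that the remoting session does not:

      Invoke-Command -ComputerName remote.machine.host -ScriptBlock {
          # SQL*Plus looks for its .msb message files under $env:ORACLE_HOME\sqlplus\mesg
          Test-Path 'C:\temp\oracle\sqlplus\mesg\sp2us.msb'
      }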

    Read the article

  • Windows Server 2008 R2, SQL Server 2008 R2, and Registry Backups

    - by charliedigital
    Hi folks! As a developer, I've installed various instances of SQL Server (2000, 2005, 2008, R2) going all the way back to 2003, and I've never had an install fail on me... until yesterday. I was installing SQL Server 2008 R2 onto a Windows Server 2008 R2 hosted virtual server, and the install finished but failed on every component. To make matters worse, it was left in a state in which I could not uninstall it either, even using command-line options! I dug around a bit but didn't get very far with it. The error is enigmatic and Google didn't turn up much hope. So today I am going to try again, after having the VM image wiped overnight. My question is: how can I guard against the same failure today? I don't mind if it fails, but then I'd know it may be something wrong with the base image I'm getting from the hosting company. I really don't feel like paying another $20 to wipe the VM, and I have no idea why it failed. Is it enough for me to back up the registry so that I can restore it in case it fails? What about the installation files? Do I need a tool to clean those out, too? Sorry, I'm no sysadmin, so no real experience with backup/restore aside from System Restore! Any advice would be appreciated!
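
    On the narrow registry question: an export taken before the install is easy, though it is a diagnostic aid more than a restore point (it will not roll back files, services, or Windows Installer state). A hedged sketch, exporting just the hive the installer touches most:

      reg export HKLM\SOFTWARE\Microsoft C:\backup\hklm-software-microsoft.reg /y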

    Read the article

  • Alternative to robocopy /MIR

    - by Robin Day
    We run a number of web apps that store a lot of local data in small XML files. One part of our backup/recovery strategy is to produce a local mirror of the file system via a VPN to the hosting centre. The VPN connection is only 12 Mbps ADSL, and whilst there are a lot of files and directories, the actual number of files that changes is quite small. Although the bandwidth is probably an issue, I'm seeing results such as the output below. The robocopy /MIR took 5 hours to run, yet only 30 mins to actually perform the copy. Does anyone have any suggestions as to ways to improve this? The 5 hours is now bordering on too slow, and if we can't find a way to speed this up then we're going to have to come up with a completely different solution.

                     Total     Copied    Skipped   Mismatch     FAILED    Extras
        Dirs :       17625       6618      11007          0          0         0
        Files :    1112430       1223    1111207          0          0         0
        Bytes :   57.451 g   192.25 m   57.263 g          0          0         0
        Times :    5:01:23    0:35:55    0:00:00    4:25:27
        Speed :                93509 Bytes/sec.
        Speed :                5.350 MegaBytes/min.
        Ended : Fri Apr 16 05:54:23 2010
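
    Reading the Times row, the copy itself took 36 minutes and the other 4.5 hours went on scanning 1.1 million files across the VPN, so the win is in cutting per-file round trips, not bandwidth. A hedged option, assuming the mirroring host has the Server 2008 R2-era robocopy (which added multithreading via /MT):

      robocopy \\remote\share D:\mirror /MIR /MT:32 /R:1 /W:1 /NFL /NDL /NP

    Failing that, an rsync-style tool that exchanges only file lists and deltas tends to handle the many-small-files case far better over thin links.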

    Read the article

  • FFMpeg-PHP Installation Error

    - by tundoopani
    While installing FFmpeg-PHP, I got this interesting error:

      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c: In function 'zim_ffmpeg_movie_getAudioStreamId':
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1051: error: 'CODEC_TYPE_AUDIO' undeclared (first use in this function)
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c: In function 'zim_ffmpeg_movie_getAudioChannels':
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1089: error: 'CODEC_TYPE_AUDIO' undeclared (first use in this function)
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c: In function 'zim_ffmpeg_movie_getAudioSampleRate':
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1125: error: 'CODEC_TYPE_AUDIO' undeclared (first use in this function)
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c: In function 'zim_ffmpeg_movie_getAudioBitRate':
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1161: error: 'CODEC_TYPE_AUDIO' undeclared (first use in this function)
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c: In function 'zim_ffmpeg_movie_getVideoBitRate':
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1181: error: 'CODEC_TYPE_VIDEO' undeclared (first use in this function)
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c: In function '_php_read_av_frame':
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1204: error: 'CODEC_TYPE_VIDEO' undeclared (first use in this function)
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1215: warning: implicit declaration of function 'avcodec_decode_video'
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1219: error: 'PKT_FLAG_KEY' undeclared (first use in this function)
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c: In function '_php_get_av_frame':
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1246: error: 'CODEC_TYPE_VIDEO' undeclared (first use in this function)
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1282: error: 'AVCodecContext' has no member named 'hurry_up'
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1284: error: 'AVCodecContext' has no member named 'hurry_up'
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c: In function '_php_get_sample_aspect_ratio':
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1443: error: 'CODEC_TYPE_VIDEO' undeclared (first use in this function)
      make: *** [ffmpeg_movie.lo] Error 1

    When I ran php -r 'phpinfo();' | grep ffmpeg, I got this:

      PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/ffmpeg.so' - libavformat.so.52: cannot open shared object file: No such file or directory in Unknown on line 0

    Any idea how I can fix this? I am running on CentOS. Thanks in advance :)
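
    These errors are a known symptom of building ffmpeg-php 0.6.0 against a newer FFmpeg, in which the old constants were renamed. The commonly circulated workaround (use with care - it only renames constants, and some FFmpeg releases changed more than that, e.g. hurry_up) is:

      cd /usr/downloads/ffmpeg-php-0.6.0
      sed -i 's/PKT_FLAG_KEY/AV_PKT_FLAG_KEY/g; s/CODEC_TYPE_/AVMEDIA_TYPE_/g' ffmpeg_movie.c
      phpize && ./configure && make

    The separate runtime warning about libavformat.so.52 suggests the FFmpeg libraries are not on the loader path; if they live in /usr/local/lib (an assumption), something like echo /usr/local/lib > /etc/ld.so.conf.d/ffmpeg.conf && ldconfig would register them.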

    Read the article

  • PHP CLI not respecting memory limit in php.ini

    - by user13743
    I am using drush, which is a command-line PHP app to manage a Drupal website. I am running a command to import a lot of data, which is causing me to hit PHP's memory limit:

      PHP Fatal error: Allowed memory size of 536870912 bytes exhausted ...

    Which is 512 MB, if I'm doing the math correctly (536870912 / 1024 / 1024 = 512). I've changed the directive in the php.ini that drush uses:

      $> drush status
      ...
      PHP configuration : /etc/php5/cli/php.ini

      $> grep memory /etc/php5/cli/php.ini
      ; Maximum amount of memory a script may consume (128MB)
      ; http://php.net/memory-limit
      memory_limit = 1024M

    But I'm still hitting the 512 MB limit! I am running in a virtual machine, whose memory I changed from 512 to 1024 MB of RAM to allow drush to run:

      $> free -m
                   total       used       free     shared    buffers     cached
      Mem:          1010        578        431          0         14        392
      -/+ buffers/cache:         172        837
      Swap:          382          0        382

    So it says it has some 431 MB free, now that I've bumped the VM up to 1024. I guess half the memory is being used to run the GUI, but I don't understand how the GUI was running okay when the VM had 512 MB of RAM. Why is the PHP CLI still hitting a 512 MB memory limit? If it was hitting a system memory limit, shouldn't it die around 431 MB, which is what the free command says is available?
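
    Two hedged checks (mine, not from the post): first, PHP enforces memory_limit itself, per process, regardless of how much system RAM happens to be free, so dying at 512M with 431M "free" is consistent. Second, Drupal and drush can override php.ini at runtime (an ini_set in settings.php, or a drushrc setting), which would explain the edit not taking:

      # what the bare CLI actually gets:
      php -r 'echo ini_get("memory_limit"), PHP_EOL;'

      # force a one-off limit for the import to test the theory
      # (<your-import-command> is a placeholder):
      php -d memory_limit=1024M "$(which drush)" <your-import-command>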

    Read the article

  • Trying to reconcile a global IP address and vhosts

    - by puk
    I have been using my local machine as a web server for a while, and I have several websites set up locally on my machine, all with similar vhost files like the one seen here, /etc/apache2/sites-available/john.smith.com:

      <VirtualHost *:80>
          RewriteEngine on
          RewriteOptions Inherit
          ServerAdmin [email protected]
          ServerName john.smith.com
          ServerAlias www.john.smith.com
          DocumentRoot /home/john/smith
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          LogFormat "%v %l %u %t \"%r\" %>s %b" comonvhost
          CustomLog /var/log/apache2/access.log comonvhost
      </VirtualHost>

    Then I set up the /etc/hosts file like so for every vhost:

      192.168.1.100 www.john.smith.com john.smith.com
      192.168.1.100 www.jane.smith.com jane.smith.com
      192.168.1.100 www.joe.smith.com joe.smith.com
      192.168.1.100 www.jimbob.smith.com jimbob.smith.com

    Now I am hosting my friend's website until he gets a permanent domain. I have port forwarding set up to redirect port 80 to my machine, but I don't understand how the global IP fits into all of this. Do I, for example, use the following website addresses (assume the global IP is 12.34.56.789)?

      12.34.56.789.john.smith
      12.34.56.789.jane.smith
      12.34.56.789.joe.smith
      12.34.56.789.jimbob.smith
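
    The piece that usually resolves this confusion: with name-based vhosts, outside visitors never put the IP into the name. Their browser sends the plain hostname in the Host header, and Apache matches it against ServerName/ServerAlias - so each domain just needs public DNS pointing at the global IP, while the /etc/hosts entries above do the same job for the local machine only. A sketch of the equivalent public zone records (keeping the post's placeholder IP, which is not a valid address):

      john.smith.com.       IN  A  12.34.56.789
      www.john.smith.com.   IN  A  12.34.56.789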

    Read the article

  • CentOS CMake Does Not Install Using gcc 4.7.2

    - by Devin Dixon
    A similar problem has been reported here with no solution: https://www.centos.org/modules/newbb/print.php?form=1&topic_id=42696&forum=56&order=ASC&start=0. I've added and upgraded gcc on CentOS:

      cd /etc/yum.repos.d
      wget http://people.centos.org/tru/devtools-1.1/devtools-1.1.repo
      yum --enablerepo=testing-1.1-devtools-6 install devtoolset-1.1-gcc devtoolset-1.1-gcc-c++
      scl enable devtoolset-1.1 bash

    The result is this for my gcc:

      [root@hhvm-build-centos cmake-2.8.11.1]# gcc -v
      Using built-in specs.
      COLLECT_GCC=gcc
      COLLECT_LTO_WRAPPER=/opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/lto-wrapper
      Target: x86_64-redhat-linux
      Configured with: ../configure --prefix=/opt/centos/devtoolset-1.1/root/usr --mandir=/opt/centos/devtoolset-1.1/root/usr/share/man --infodir=/opt/centos/devtoolset-1.1/root/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --disable-build-with-cxx --disable-build-poststage1-with-cxx --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --enable-languages=c,c++,fortran,lto --enable-plugin --with-linker-hash-style=gnu --enable-initfini-array --disable-libgcj --with-ppl --with-cloog --with-mpc=/home/centos/rpm/BUILD/gcc-4.7.2-20121015/obj-x86_64-redhat-linux/mpc-install --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
      Thread model: posix
      gcc version 4.7.2 20121015 (Red Hat 4.7.2-5) (GCC)

    I then tried to install CMake from http://www.cmake.org/cmake/resources/software.html#latest, but I keep running into this error:

      Linking CXX executable ../bin/ccmake
      /opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: CMakeFiles/ccmake.dir/CursesDialog/cmCursesMainForm.cxx.o: undefined reference to symbol 'keypad'
      /opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: note: 'keypad' is defined in DSO /lib64/libtinfo.so.5 so try adding it to the linker command line
      /lib64/libtinfo.so.5: could not read symbols: Invalid operation
      collect2: error: ld returned 1 exit status
      gmake[2]: *** [bin/ccmake] Error 1
      gmake[1]: *** [Source/CMakeFiles/ccmake.dir/all] Error 2
      gmake: *** [all] Error 2

    The problem seems to come from the newly installed gcc, because it works with the default install. Is there a solution to this problem?
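
    The linker note is literal: ccmake's curses UI uses keypad, which on this platform lives in libtinfo, and the devtoolset linker is stricter about pulling in indirect DSOs than the stock one. A hedged workaround (assuming ncurses-devel is installed and that CMake's bootstrap script honors LDFLAGS, which it does in this era):

      yum install ncurses-devel
      cd cmake-2.8.11.1
      LDFLAGS="-ltinfo" ./bootstrap && gmake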

    Read the article

  • Explain why a folder's permissions differ depending on HOW the user is accessing the server: AFP vs SSH

    - by Meltemi
    Hoping someone can explain what is probably fairly obvious... but confuses me. Imagine two users with admin privileges on our server (Mac OS X Server 10.5). Call them joe & bob. Both users are members of these groups:

      Staff      Group ID: 20
      Workgroup  Group ID: 1025

    The shared folder "devfolder" has sharing set as so:

      POSIX:
        Owner: joe    read & write
        Group: admin  read & write
        Other         no access
      ACL:
        Workgroup     Allow Read & write

    The question is why, when looking at the same folder, the ownership appears to change depending on who's doing the looking?!? Both are looking at the same folder on the server. From Joe's perspective:

      xserve:devfolder joe$ ls -l
      drwxrwxr-x  6 joe  workgroup  204 May 20 19:32 app
      drwxrwxr-x  9 joe  workgroup  306 May 20 19:32 config
      drwxrwxr-x  3 joe  workgroup  102 May 20 19:32 db
      drwxrwxr-x  3 joe  workgroup  102 May 20 19:32 doc
      drwxrwxr-x  3 joe  workgroup  102 May 20 19:32 lib

    And from Bob's perspective (folder mounted on his machine via AFP):

      bobmac:devfolder bob$ ls -l
      drwxrwxr-x  6 bob  _bob  264 May 20 19:32 app
      drwxrwxr-x  9 bob  _bob  264 May 20 19:32 config
      drwxrwxr-x  3 bob  _bob  264 May 20 19:32 db
      drwxrwxr-x  3 bob  _bob  264 May 20 19:32 doc
      drwxrwxr-x  3 bob  _bob  264 May 20 19:32 lib

    Now if Bob connects to the server via SSH, his output is identical to Joe's, as expected. Can anyone tell me what the client is doing in this case, and what should be expected when Bob creates or updates files in this folder? What tools do I have to better understand this from the command line? Is this normal? Perhaps there's a "cleaner" way that wouldn't be confusing with "bob _bob"?!?
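
    On the "what tools" question (a suggestion, not from the post): the AFP client maps server identities onto local accounts - which is why Bob sees bob/_bob over AFP but the server's real joe/workgroup over SSH - and the POSIX bits shown over AFP are a translation, with the ACL doing the real work. Two things worth running on the server:

      # -e lists ACL entries alongside the POSIX bits on Mac OS X
      ls -le /path/to/devfolder      # path is a placeholder

      # confirm how the server resolves each account
      id joe
      id bob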

    Read the article

  • Can't access Dell BMC IPMI Over IP

    - by Bobb
    I have a Dell R210 with an iDRAC BMC (the new name for the old BMC), which is an on-board feature with a shared NIC (I believe). The server is at a colocation facility and I didn't set it up before it was sent there... so I asked the remote hands to set up IPMI Over IP. They enabled it, set the IP, and everything. The IP is different from the main box IP. Also, the box is cabled to NIC1, and the BMC is supposed to share it (am I right?). I can see the new IP in the OpenManage Server Administrator (installed on the box). I tried the Supermicro IPMI tool, and I tried Dell's ipmish.exe command like this:

      ipmish -ip xxx -u root -p calvin sysinfo

    which gives:

      BMC is not detected

    What could be wrong? Is there a diagnostics tool I can try? It must be something obvious; I just never used things like this before.... P.S. I read something about encryption keys in the Dell docs, but I understand that is for encrypted IPMI 2.0, and ipmish can use IPMI 1.5 without encryption.
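
    As a cross-check from any Linux box that can reach the BMC's IP, the open-source ipmitool speaks both IPMI generations; if these fail too, suspicion shifts to the BMC's LAN channel or the colo switch port rather than the Dell tooling (the IP is a placeholder, the credentials are the post's examples):

      # IPMI 1.5 session:
      ipmitool -I lan -H <bmc-ip> -U root -P calvin mc info

      # IPMI 2.0 (lanplus) session:
      ipmitool -I lanplus -H <bmc-ip> -U root -P calvin chassis status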

    Read the article
