Search Results

Search found 15621 results on 625 pages for 'creating'.

Page 463 of 625

  • Corsair SSD appears completely blank and does not retain written data

    - by ebanders
    I have a 180GB Corsair SSD (model# CSSD-F180GB2-BRKT) as the primary drive in a Windows laptop. Recently the machine became unbootable after installing Windows updates. Windows installed updates before the machine shut down, and the next time it booted up it complained about not being able to find a bootable device. After finding fixmbr unsuccessful at making the machine bootable, I investigated a little within Knoppix. fdisk revealed an empty partition table, a scan by TestDisk came up empty, and finally 'head -c 1024 | hd' reveals all zeros. Creating a primary partition spanning the whole disk completes successfully, but after a reboot the disk appears empty again. dmesg reveals no read or write errors. smartctl indicates that the drive is healthy, although the SMART attribute values do not appear to be read properly: "Data Page | WARNING: PREVIOUS ATTRIBUTE HAS TWO" and "Threshold Page | INCONSISTENT IDENTITIES IN THE DATA" messages appear within the table of values. I don't have much experience with SSDs. Is this drive dead? Can anyone recommend diagnostic tools suited to SSDs?
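
    For reference, a minimal write-persistence test that can be run from a Knoppix shell (a sketch only; /dev/sdX is a placeholder for the SSD's device node, and this overwrites the first megabyte of the disk):

        # write a known pattern, flush caches, then read it back
        dd if=/dev/urandom of=/tmp/pattern bs=1M count=1
        dd if=/tmp/pattern of=/dev/sdX bs=1M count=1 oflag=direct
        sync
        echo 3 > /proc/sys/vm/drop_caches
        dd if=/dev/sdX bs=1M count=1 iflag=direct | md5sum
        md5sum /tmp/pattern   # the two sums should match; if they differ, writes are not persisting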

    Read the article

  • Can't start my phpMyAdmin

    - by vrynxzent
    I am creating my own portable server, but I can't get phpMyAdmin to run; MySQL, PHP and Apache are running, only phpMyAdmin is not. When I check Apache's error log, it states:

        [Fri Nov 09 08:54:37 2012] [warn] pid file F:/Drive WebServer/Drive WebServer/bin/Debug/Apache2bak/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
        PHP Warning: PHP Startup: Unable to load dynamic library './php_mysql.dll' - The specified module could not be found.\r\n in Unknown on line 0
        PHP Warning: PHP Startup: Unable to load dynamic library './php_mysqli.dll' - The specified module could not be found.\r\n in Unknown on line 0
        [Fri Nov 09 08:54:37 2012] [notice] Apache/2.0.64 (Win32) PHP/5.2.17 configured -- resuming normal operations
        [Fri Nov 09 08:54:37 2012] [notice] Server built: Oct 18 2010 01:36:23
        [Fri Nov 09 08:54:37 2012] [notice] Parent: Created child process 6784

    I manually assigned the exact path, F:/php/ext/php_mysql.dll, but I still get the same error:

        PHP Warning: PHP Startup: Unable to load dynamic library 'F:/php/ext/php_mysql.dll' - The specified module could not be found.\r\n in Unknown on line 0

    I have this option set in php.ini: extension_dir = "./". Another error also pops up: it says libmysql.dll is missing. PHP version: 5.2.17. Any help would be appreciated. ;)

    Read the article

  • Why can't I connect to my home SSH (SFTP) server? What am I doing wrong?

    - by Rolo
    I am new to this topic of creating an SFTP server on one's own computer. I would like to be able to access a folder on my Windows XP computer via SFTP from another computer or a phone. The following is what I have done so far: I have installed SSH Windows and everything is set up correctly, because I can access it (the folder on my PC) via WinSCP. I however cannot access it from my phone; it doesn't connect. The phone can be on the same wireless network as the Windows XP computer, but I would prefer to be able to access this when not on the same network. From what I have read and understood, the following is the information needed to connect:
    1) Host Name: my computer's IP address, which I get by typing ipconfig in a cmd prompt (on the computer itself I simply use localhost or 127.0.0.1).
    2) Port Number: port 22 (I have also added this to my router in the port forwarding section).
    3) Username: my Windows XP username. This however is my full name, including my middle initial followed by a period. I am wondering if this is causing problems when connecting from my phone, since the name has spaces and punctuation (the period).
    4) Password: the password of my Windows XP account.
    Extra info: by "phone" I mean an Android phone, and I am using an FTP/SFTP app to access my PC via the phone's cellular network (I also tried wireless, but that didn't work either). I have tried more than one program; one tells me "Connection timed out" and another tells me "timeout: socket is not established". Also, I know that I can use the site noip, but I prefer to connect this way first. Because I am new to this, I would also like to look into what exactly noip is doing and whether they would see my files as they are transferred from phone to PC. Thanking you in advance for your help.
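
    As a quick sanity check, reachability of the forwarded port and a username containing spaces can be tested from any machine outside the home network; a sketch only, with the public IP and username as placeholders:

        # is port 22 actually reachable from the internet?
        nc -zv 203.0.113.10 22
        # usernames with spaces and punctuation must be quoted for the SFTP client
        sftp -P 22 "John Q. Public"@203.0.113.10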

    Read the article

  • Is there a simple context-menu add-in that could make up for the Windows 7 status bar deficiency?

    - by DanO
    Edit: I initially asked about free disk space and selected item size. It has since been pointed out that the selected-item size summary is still available natively in the details pane. I had read elsewhere (Wikipedia) that this was removed along with free disk space, which is not the case; only free disk space has been completely removed. Selection size is still available. Is there a context-menu add-in out there that could show the free disk space of the relevant drive when you right-click? This would go a long way toward compensating for one of the only steps backward I've discovered in Windows 7 so far. I doubt anyone had created one specifically for this need before Windows 7, because this information was previously easily accessible in the status bar. I thought about creating one, but it has been a while since I have messed with the Shell API, and I know there are coders out there who could do it faster and better. If you've heard of one, or know of something else to make up for this Microsoft misstep, I'd appreciate hearing about it. If MS were listening to the community they would already have a PowerToy or add-in of some kind to un-break this (they could even release it unsupported), as there seem to be many power users who are extremely annoyed by this feature-removal decision. If anyone has seen something, please post it here. As it has been only 4 days since the official Windows 7 release, I'll wait at least a week to choose an answer. Here's a picture of a prototype screenshot: SU question 19232 is related.

    Read the article

  • Customer won't provide SSH access - FTP only

    - by Max
    Eh, here is my problem: I am working in a web development agency (that's a problem, but not the real problem; read on). Most of the time I choose the live server myself when creating a new website project. But now the customer already has a "server" (10 GB on a cheapo host!) and the "admin" refuses to give me SSH access to it. I need shell access to the server because many files will be transferred (I need to be able to upload and extract a tar) and I need to insert or create MySQL dumps via the command line. He argues FTP and phpMyAdmin should be enough... As far as I know, the webspace was ordered just to host the website, so no security-critical apps are running there. How can I either convince the admin to give me the SSH login or tell management that we need our own server? Anyone with similar experiences? This is really annoying, as this is a very small project that should be done fast, and now one has to fight just to get the work done...

    Read the article

  • PHP Web Server Solution (Apache/IIS)

    - by njk
    I apologize if this is too broad or belongs on Super User (please vote to move if it does). I'm in the process of creating requirements for an internal PHP web server to submit to our architecture team, and I would like some insight on whether to use a Windows or *nix platform and what applications would be required. The server will host a small PHP application that will connect to SQL Server. The application will need to send mail. We would also like to incorporate an FTP server to allow files to be dropped in. From what I've read regarding a Windows platform using IIS, it seems as though IIS would only be advantageous for a .NET or ASP application. Does IIS have mail functionality? Or how is mail traditionally configured (especially on *nix)? Also, does IIS have directory configuration functionality like Apache does with .htaccess?
    For a Windows-based solution:
    - IIS (comes with FTP)
    - Apache (has the mod_ftp module)
    For a *nix-based solution:
    - Apache

    Read the article

  • How to make a redundant desktop system with daily snapshots? (Is btrfs ready for use?)

    - by TestUser16418
    I want to configure a desktop system in which the home filesystem would be redundant (e.g. RAID-1) and would have weekly snapshots taken. I've already done this with ZFS; the snapshot system is wonderful, and with send/recv you can easily create backups on external media. Unfortunately, at this point I want GNU+Linux and not FreeBSD or Solaris, so I'm looking for suggestions for good alternatives. I reckon my alternatives are:
    - btrfs: it seems to be exactly what I need; it has snapshots and commands that allow you to easily replicate zfs send. Yet all documentation mentions that it's still experimental, and I can't find any actual reports on its reliability or usability issues. Can you point me to any information that could clarify whether it would be a possible choice? I have a large preference for this option, mostly because I don't want to reformat the drives when btrfs becomes ready, but there's no information on whether it's usable at all, or whether it's a silly idea to use it. The question I cannot get an answer to is what "experimental" means.
    - LVM snapshots and ext4: preferably not, since it can consume an awful amount of space when new files are created. Creating 200 GB of files requires 200 GB of free space and 200 GB additionally for snapshots. I have also found it unreliable: a failed metadata rewrite results in an unreadable PV. I'm wondering how btrfs would compare here.
    - A single filesystem (ext4) on a RAID-1 array with custom COW snapshots using hardlinks (like cp -al). That's my current preference if I can't use btrfs.
    So: how experimental is btrfs, which option should I choose, and do I have any other options? What if I don't keep external incremental backups; would that affect my choice?
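
    For comparison with zfs send/recv, a minimal sketch of the btrfs equivalent (assuming /home is a btrfs subvolume and /mnt/backup is a btrfs filesystem on the external media; dates are placeholders):

        # read-only snapshot, named by date
        btrfs subvolume snapshot -r /home /home/.snapshots/$(date +%F)
        # full replication of a snapshot to the external media
        btrfs send /home/.snapshots/2012-06-01 | btrfs receive /mnt/backup
        # later: incremental send relative to a parent snapshot both sides already have
        btrfs send -p /home/.snapshots/2012-06-01 /home/.snapshots/2012-06-08 | btrfs receive /mnt/backup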

    Read the article

  • Can't save or create files on external hard disk

    - by Rodniko
    I formatted my computer and installed a fresh Win7. I connected my external hard disk (USB connector) and I have some kind of permission problem: I can't save files after opening them, and right-clicking and choosing "New" shows that I can only create folders. What is wrong? Why doesn't the external HD have permission, and how do I change it? At Microsoft they probably thought: "Hmmm... how would we make it difficult for the user to use our product?", "We will have to make the difficulty appear as soon as Windows is installed...", "But how do we guarantee 100% that the user has problems?", "Oh yeah! Block creating files and saving them, yes!" I'm so tired of those guys... My HD is half a terabyte; changing ownership of all the files inside will take a couple of hours. I need other ideas... if any.

    Read the article

  • securing communication between 2 Linux servers on local network for ports only they need access to

    - by gkdsp
    I have two Linux servers connected to each other via a cross-connect cable, forming a local network. One of the servers presents a DMZ for the other server (e.g. a database server) that must be very secure. I'm restricting this question to communication between the two servers on ports that only need to be available to these servers (and no one else). Thus, communication between the two servers can be established by: (1) opening the required port(s) on both servers and authenticating according to the applications' rules, or (2) disabling iptables on the NICs the cross-connect cable is attached to (on both servers). Which method is more secure? In the first case, the needed ports are open to the external world but protected by user name and password. In the second case, none of the needed ports are open to the outside world, but since iptables is disabled for the NICs associated with the cross-connect cable, essentially all ports may be considered "open" between the two servers (and so if the server presenting the DMZ is compromised, the attacker on the DMZ server could reach every port over the cross-connect cable). Any conventional wisdom on how to secure the communication between two servers for ports only these servers need access to?
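
    A third option is to leave iptables enabled and scope the rules to the cross-connect interface; a minimal sketch only, where eth1, the peer address 192.168.100.2 and port 5432 are placeholders:

        # on the database server: allow only the DB port from the peer on the cross-connect NIC
        iptables -A INPUT -i eth1 -s 192.168.100.2 -p tcp --dport 5432 -j ACCEPT
        # drop everything else arriving on that interface
        iptables -A INPUT -i eth1 -j DROP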

    Read the article

  • Run as another user, but also as administrator

    - by Tewr
    I am trying to debug a virtual machine (VM) running on a remote computer from my workstation (A). Both VM and A are running Windows 7 Enterprise. Apparently, I need to start the Remote Debugger Service (RDS) on VM as an administrator. Apparently, I also need to run RDS as the user Tewr logged in on A (domain: DOM). VM runs the services I need to debug, as well as the remote desktop interface, under an account VMUSER in a domain called VMDOMAIN. I manage to start RDS as administrator, but then the RDS process is owned by VMUSER, and that's not good enough. I also manage to run RDS as DOM\Tewr, but then not as an administrator. I have added DOM\Tewr as an administrator on VM, but that's not good enough because the process is still not run as administrator. How can I run the RDS process as DOM\Tewr and "as administrator" while logged on in Windows as VMDOMAIN\VM? (Note: I have tried creating an account with the same credentials/password as VMUSER, as hinted in the MS article above, but with no luck...)

    Read the article

  • Applications are being opened by IE instead of running normally

    - by Star
    I rewrote the question to add everything that I have tried so far. Many of my applications (not all) are being opened by Internet Explorer. For example, when I run Firefox.exe (from a shortcut) I get IE instead, with the following URL: http: // %22d/ Browser/firefox.exe%22 (I added spaces to prevent link creation). The shortcut target is: "D:\Browser\firefox.exe". When I attempted to open firefox.exe from its folder, the results were the same as above. I attempted to open it from cmd, so I navigated with cmd to the Firefox path and typed firefox.exe; the result was the same except that the URL was http: // Firefox.exe/. When I type just firefox, the resulting URL is http: // Firefox/ (is it some kind of parameter or something?). Trying the same with Chrome gave the same results as the previous tests. I tried creating a new user (administrator), but the problem is still there. I tried every registry key with "exe" in it (not sure if I tried them all): no change. I tried removing IE, but it came back by itself somehow; while IE was removed, Firefox and its fellow apps gave me the "Open with" window. I tried reinstalling the applications, but it's just no use.
    Timeline (as requested by @Daredev): I don't know when it happened, because the computer belongs to the company I work for and it was like that since I got it (the IT department there gave up on the problem a long time ago!). Applications already installed were Firefox and XPS Viewer. Applications still working after the problem: everything except what uses browsing (MS Help Viewer, XPS Viewer, Firefox - even after I reinstalled it - Opera, Chrome), or so I thought, but after installing Maxthon and Comodo Dragon that theory was blown away.
    System info:
    1- Windows XP Professional Service Pack 3
    2- System fully patched: yes
    3- Anti-virus up to date: yes
    4- Same behavior when booting into safe mode: yes

    Read the article

  • ubuntu 12.04 kvm virtual server network setup, can't get the machine to be connectable

    - by xyious
    I have worked on my Ubuntu Server host for weeks now and I just cannot manage to get the virtual machines onto the network. Here's what I need to do: I need to be able to create virtual machines that have IP addresses reachable from the outside (the 192.168 network), and I need to be able to connect to the virtual machines through ssh, ftp, http and preferably https; anything else doesn't matter that much. So far everything seems simple enough, and I have a lot of leeway in terms of IP address range and server/client configuration. I have the option of taking part of a /24 net, as most IPs aren't used, and if absolutely necessary I have the option of creating a new /24 subnet. I also have the option of reformatting and reinstalling the OS on the host and recreating the virtual machines, as nothing has been done other than trying to get the virtual machines to work. I would prefer the virtual machines to just be part of the normal network, which is 192.168.5.0/24. The host machine has two network cards, so I don't even necessarily need the host to be reachable in the same /24 network. I have tried (I think) just about everything from about five different tutorials on bridging: giving br0 the same IP that eth0 used to have (host is able to connect to VM and vice versa, but the VM has no outside network access), having eth0 set up as it always was and giving br0 a different IP (same as above), NAT with port forwarding (which I would have preferred not to use, but will if it works), turning off one of the host's network cards and just using one of them, different subnets... etc. I do know my way around iptables fairly well. The host is 64-bit Ubuntu Server 12.04, using libvirt/KVM.
    Edits: The local network is 192.168.5.0/24; the host has static IP 192.168.5.254, gateway .5.1, which is also the nameserver. We have a second local network at 192.168.10.0/24 with gateway .10.1, but both hosts and VMs were supposed to go into the .5 subnet. The .10 subnet isn't required, but it wouldn't be horrible if the host were only accessible in the .10 subnet.
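
    For reference, a minimal bridged setup on an Ubuntu 12.04 KVM host usually looks something like the sketch below (a sketch only; the addresses mirror those above and eth0 is assumed to be the NIC on the .5 network; run it from the console, since flushing eth0 drops any SSH session over that NIC):

        sudo apt-get install bridge-utils      # provides brctl
        sudo brctl addbr br0
        sudo brctl addif br0 eth0
        sudo ip addr flush dev eth0            # the bridge, not the NIC, carries the host address
        sudo ip addr add 192.168.5.254/24 dev br0
        sudo ip link set br0 up
        sudo ip route add default via 192.168.5.1
        # make it persistent in /etc/network/interfaces (iface br0 inet static ... bridge_ports eth0)
        # guest XML then uses: <interface type='bridge'><source bridge='br0'/></interface>
        # if VM traffic is still blocked, check sysctl net.bridge.bridge-nf-call-iptables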

    Read the article

  • How to setup a virtual host in Ubuntu running on Amazon EC2 instance?

    - by Rade
    I have an app that's accessible via 1.2.3.4/myapp. The app is installed in /var/www/myapp. I've set up a subdomain (apps.mydomain.com) that points to 1.2.3.4. I want the server to serve /var/www/myapp when I type apps.mydomain.com/myapp; how do I do that? I have experience creating virtual hosts (lots of them) locally, but I'm lost because this is now in production and it's a little different. Here's my virtual host config:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName apps.mydomain.com/myapp
            DocumentRoot /var/www/myapp/public
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride All
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Any idea why I still see the files instead of being pointed to the document root? Just in case someone asks, the app is based on the Laravel 4 framework. It's really bad right now because anyone can access the files from the browser.

    Read the article

  • Information on the BMPP File Extension/Format

    - by Angel Brighteyes
    I am looking for information on the file type BMPp. Namely, I need an application that can create this file type, preferably open source or free. Wikipedia's article on the BMP file format says that 'BMPp' is a "type code", which is the "mechanism used by pre-OSX Macs ... to denote a file's format" (look in the little info box of general information under "Type code"). Continuing my research, I found an old archived mailing-list post from 2009, "Re: Incorrect png file type 'PNG'", that talks about something related to another problem a developer was having. In the response he talks about there being variant file types, and lists BMPp as being linked to an old version of Graphics Converter. The company Lemkesoft sells Graphics Converter, which I am not willing to purchase. I can't imagine that the only program in existence that can make a BMPp file is that one. There has got to be another way to make that file type, other than creating a BMP file and just renaming it to BMPp (unless of course it is really that easy)? This is the first time I've run into this file format, and it took a bit of Google, Bing, and Wikipedia to find the information that I've posted here. Any further help would be appreciated.
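
    Since the Wikipedia info box quoted above describes 'BMPp' as a classic Mac OS type code rather than a distinct on-disk format, one possible sketch (assuming a Mac with the Xcode command-line tools, which provide GetFileInfo and SetFile) is to stamp the code onto an ordinary BMP:

        # inspect the current HFS type code
        GetFileInfo -t picture.bmp
        # set the type code to BMPp; the file's contents stay plain BMP
        SetFile -t "BMPp" picture.bmp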

    Read the article

  • How to reduce memory consumption on an AWS EC2 t1.micro instance (free tier) ubuntu server 14.04 LTS EBS

    - by CMPSoares
    Hi, I'm working on my bachelor thesis and for that I need to host a Node.js web application on AWS. In order to avoid costs I'm using a t1.micro instance with 30 GB of disk space (from what I know, that's the maximum I get in the free tier), which is barely used. Instead I have problems with memory consumption: it's using all of it. I tried the approach of creating a virtual swap area as mentioned in "Why don't EC2 ubuntu images have swap?", with these commands:

        sudo dd if=/dev/zero of=/var/swapfile bs=1M count=2048 &&
        sudo chmod 600 /var/swapfile &&
        sudo mkswap /var/swapfile &&
        echo /var/swapfile none swap defaults 0 0 | sudo tee -a /etc/fstab &&
        sudo swapon -a

    But this swap area isn't being used somehow. Is something missing in this approach, or is there another way of reducing memory consumption on this type of AWS instance? Bottom line: this causes server freezes and crashes, and that's what I want to stop, either by using the swap, reducing memory usage, or both.
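
    As a quick check that the swap actually came online (a sketch only; the swappiness value is just an example):

        swapon -s            # should list /var/swapfile with a non-zero size
        free -m              # the Swap: row should show roughly 2048 total
        cat /proc/sys/vm/swappiness
        # if swap exists but is never touched, a higher swappiness makes the kernel swap sooner
        sudo sysctl vm.swappiness=60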

    Read the article

  • How to (re)enable the "New" context menu items for an administrator when right-clicking in a folder and selecting New > X?

    - by Metro Smurf
    I just migrated from XP x86 to Win7 x64 (clean install). I had a couple of data drives in my XP x86 system that I physically moved to my Win7 x64 system. When browsing a directory on any of the transferred drives, the only option available in the 'New' context menu is "Folder", i.e. right-click inside a folder > New > Folder (this is similar to Win7's behavior when using the context menu in C:\Program Files). However, whenever I create a new folder within any of those directories, all the 'New' context-menu items are available within the new folder.
    Steps I've taken that have failed to add the 'New' context-menu items:
    - Removing all security permissions from a directory and its sub-directories and replacing them with new permissions, as well as removing inheritable permissions from the parent.
    - Taking explicit ownership of a directory and its sub-directories.
    - Combining the above two.
    (A sample of Effective Permissions that do not work was shown here.)
    Steps I've taken that have succeeded in adding the 'New' context-menu items:
    - Adding the "Everyone" group to the drive and giving the group explicit "Modify" privileges. Giving the "Everyone" group explicit privileges smells wrong; I'm an administrator on my system, so why should I have to add the "Everyone" group as well?
    - Adding my username to the drive and giving it full permissions. Again, since I'm an administrator on my system and the Administrators group already has full control of the drive/directories/folders, why should I have to explicitly add my user name to the security permissions?
    Finally, the question: is it possible to have the 'New' item context menu show all available options by default, without having to explicitly add the Everyone group or a specific user name to the security permissions? I suspect the options may not be available unless the username is explicitly added to the security permissions. Of note: I've seen the registry hacks for updating the 'New' items context menu; my preference is to avoid such hacks and return the functionality to the expected behavior an administrator should have.

    Read the article

  • Windows Server 2003 setup

    - by Barracksbuilder
    I work at a university, maintaining the computer science department's server, and I am looking for a more economical way to streamline the setup of student accounts. CS students are granted a username and password, an IIS virtual directory, an FTP virtual directory, and a MySQL database. The server is running Windows Server 2003 R2 (possibly migrating to 2008 R2). The server is running a domain, though no students physically log into it at a terminal (no computers are part of my domain). Creating an account is a manual process. I did write a PHP script that queries the university's AD, copies the information, and writes it to my AD. I then basically have to create the user's home directory by hand. I tried having AD do it, but since the user never physically logs in, it never creates the directory. Permissions on this folder are set to: User - full; Instructors (group) - full; Users (group) - read; IUSER - read. Inside the user's folder there is a "Private" folder with permissions: User - full; Instructors (group) - full. Next is IIS: I create a virtual directory in the default web site pointed to the user's home directory, so they have a website. The same goes for an FTP virtual directory in the default FTP configuration, to allow the users to upload files to their website. For MySQL, I have to create a user and password, then create a MySQL schema (database) with full access for the user and full access for the Instructors account, so instructors can access the students' databases. All of this is done manually and takes me a week. The closest description is maybe a shared hosting environment. Is there a better way to do this, scripting-wise, or a better structure/setup?

    Read the article

  • How to keep multiple servers in sync file wise?

    - by GForceSys
    I'm currently managing a cluster of PHP-FPM servers, all of which tend to get out of sync with each other. The application I'm using on top of the app servers (Magento) allows admins to modify various files on the system, but now that the site is in a clustered setup, modifying a file only modifies it on a single instance (one of the app servers) of the various machines in the cluster. Is there an open-source application for Linux that would allow me to keep all of these servers in sync? I have no problem with creating a small VM instance that can listen for changes from machines to sync. In theory, the perfect application would have small clients running on each machine to be synced, which would talk to a master server that decides how/what to sync from each machine. I have already examined the possibility of running a centralized file server, but unfortunately my app servers are spread out between EC2 and physical machines, which makes this unfeasible. As there are multiple app servers (some of which are created dynamically depending on the load of the site), simply setting up an rsync cron job is not efficient, as the cron job would have to be modified on each machine to send files to every other machine in the cluster, and that would just be a whole bunch of unnecessary data transfers/SSH connections.
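
    One common sketch of the push model described above uses inotify-tools plus rsync on the node where admins edit files (hostnames and paths here are hypothetical; tools like lsyncd or csync2 wrap the same idea more robustly):

        #!/bin/bash
        # watch the Magento tree and push any change to the other app servers
        PEERS="app2.example.com app3.example.com"
        inotifywait -m -r -e modify,create,delete,move --format '%w%f' /var/www/magento |
        while read -r changed; do
            for host in $PEERS; do
                # a full-tree rsync per event is crude but simple; SSH keys are assumed to be in place
                rsync -az --delete /var/www/magento/ "$host":/var/www/magento/
            done
        done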

    Read the article

  • Google Sites (via Apps) setup questions

    - by Dave
    I thought that it would be a piece of cake to set up a Google site via Google Apps, but perhaps my previous (limited) experience with web development has given me unrealistic expectations. I have actually had a really tough time finding help with the exact question I have, which is: how do I change the home page contents??? You see, I'm used to having hosting with someone like GoDaddy, where I can just FTP in and drop my HTML files in the www folder. From research I have found that this is simply not possible with any flavor of Google Sites. That's fine, I can live with it. So let's say I have www.mydomain.com. When I hit that URL, it redirects me to a very long URL (unfortunately) like https://sites.google.com/a/mydomain.com/sites/system/app/pages/meta/domainWelcome, which just says: "Google Apps - Welcome to mydomain.com - If you are the domain administrator get started creating your home page with Google Sites". Great! I want to do that. So I click on the "If you are the..." link and end up at a screen where I can choose a template, a name, and some visibility options. If I click on My Sites, there isn't a "default" site, i.e. the one that www.mydomain.com displays. I figured that maybe I just had to create a site first, so I went ahead and did that. My first test was to create a site that was publicly accessible. I thought that maybe if I did that, Google would decide that this must be my home page, since it's the only one. But it doesn't, and I still get the "Welcome to" page. Under "More Actions", I didn't see anything interesting except for "Manage site". I went in there and had a peek around, and didn't see anything about using this as the default home page. Am I looking for something that just doesn't exist? I can't believe there isn't a way to modify the "domain welcome" page...

    Read the article

  • Managing records of bugs and notes

    - by Jim
    Hi. I want to create a knowledgebase for a piece of software. I'd also like to be able to track bugs and common points of failure in that application. Linking knowledgebase articles to bug records would be a real boon, as would the ability to do complex queries for particular articles and bugs on the basis of tags or metadata. I've never done anything like this before, and I'd like to install as little as possible. I've been looking at creating a wiki with Wiki On A Stick, and it seems to offer a lot, but I can't make complex queries: I can create pages that list all 'articles' with a particular single tag, but I can't specify multiple tags or filters. Is there any software that can help? I don't want to spend money until I've tried something out thoroughly, and I'd ideally like something that demands little to no installation. Are there any tools that can help me? If something could easily export its data, or store data in XML, that would be a real plus too. Otherwise, are there any simple apps that allow me to set up forms for bugs, store the data as XML, then query and process that XML on demand? Thanks in advance.
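
    If the data does end up in XML, multi-tag queries are straightforward even without a dedicated app; a minimal sketch with xmllint, where the bugs.xml layout is purely hypothetical:

        # bugs.xml is assumed to look like: <bugs><bug id="17"><tag>ui</tag><tag>crash</tag>...</bug></bugs>
        xmllint --xpath '//bug[tag="ui" and tag="crash"]' bugs.xml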

    Read the article

  • Too many connections to Sql Server 2008

    - by Luis Forero
    I have an application in C# on Framework 4.0. Like many apps, this one connects to a database to get information; in my case the database is SQL Server 2008 Express, and it is on my machine. In my data layer I'm using Enterprise Library 5.0. When I publish the app on my local machine (classic app pool, Windows Professional, IIS 7.5), the application works fine. I'm using this query to check the number of connections my application creates while I'm testing it:

        SELECT db_name(dbid) as DatabaseName, count(dbid) as NoOfConnections, loginame as LoginName
        FROM sys.sysprocesses
        WHERE dbid > 0 AND db_name(dbid) = 'MyDataBase'
        GROUP BY dbid, loginame

    When I start testing, the number of connections starts growing, but at some point the maximum number of connections is 26. I think that's OK, because the app works. When I publish the app to TestMachine1 (XP Mode virtual machine, Windows XP Professional, IIS 5.1), it works fine and the behavior is the same: the number of connections to the database grows to 24 or 26, and after that it stays there no matter what I do in the application. The problem: when I publish to TestMachine2 (classic app pool, Windows Server 2008 R2, IIS 7.5) and start to test the application, the number of connections to the database starts to grow, but this time it grows very rapidly and doesn't stop at 24 or 26; the number of connections grows until it reaches 100, and the application stops working at that point. I have checked for any differences between the deployments, especially between Windows Professional and Windows Server, and they seem to have the same parameters and configuration. Any clues why this could be happening? Any suggestions?

    Read the article

  • Convert Public Folder to Shared Mailbox

    - by Lilienthal
    Due to a change in company policy, all existing Public Folders (PF) have to be phased out in favour of shared mailboxes. Unfortunately, they don't seem to have any procedures or guidelines for this migration and I can't find much online either. I've already migrated one of our public folders so far as a sort of test case. Because we still use Exchange 2003, we can't create real shared mailboxes as we would in 2007 or 2010 (With New-Mailbox -Shared ... in the Exchange Shell). Instead, I simply created a new account on the AD and assigned it a mailbox. I then set the PF's permissions to read-only to keep it in a consistent state and copied the entire folder to a local PST in Outlook 2010, from which the folder was in turn copied to the new mailbox. Permissions and Folder Visible were set for all users and the migration was successful. While this works, the whole procedure feels very hackish to me and not at all efficient. I'd welcome some input on automating or at least streamlining the process. Additionally, we are unsure of what to do with our mail-enabled Public Folders. Several of these are nested under other PFs, some of which are also mail-enabled. Preserving folder structure is a key requirement and this seems impossible at first glance. I've considered creating dummy accounts for all the email addresses from our mail-enabled PFs and then setting up automated rules to forward messages to a subfolder of the new shared mailboxes, but I am not familiar enough with Exchange to know if this is even possible. Further points of concern are the Calendars and Contact lists in our public folders. I suppose I'll be forced to create new mailboxes for every one of these we have as well, then set up share permissions for their Calendar and Contact items, but would be happy to be proven wrong.

    Read the article

  • DNAT from localhost (127.0.0.1)

    - by pts
    I'd like to set up a TCP DNAT from 127.0.0.1, port 4242, to 11.22.33.44, port 5353, on Linux 3.x (currently 3.2.52, but I can upgrade if needed). The simple DNAT rule setup doesn't seem to work: telnet 127.0.0.1 4242 hangs for a minute at "Trying 127.0.0.1...", and then it times out. Maybe it's because the kernel is discarding the returning packets (e.g. SYN+ACK), because it considers them Martian. I don't need an explanation of why the simple solution doesn't work; I need a solution, even if it's complicated (e.g. it involves creating many rules). I could set up a usual DNAT from another local IP address outside the 127.0.0.0/8 network, but now I need 127.0.0.1 as the destination address. I know that I can set up a user-level port-forwarding process, but this time I need a solution that can be set up using iptables and doesn't need helper processes. I was googling for this for an hour; it has been asked multiple times, but I couldn't find any working solutions. There are also many questions about DNAT to 127.0.0.1, but I don't need that, I need the opposite.
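
    For reference, the recipe most often cited for this case looks roughly like the sketch below (not verified here; note that the route_localnet sysctl only exists from kernel 3.6 onward, so it would require the mentioned upgrade):

        # rewrite locally generated traffic aimed at 127.0.0.1:4242
        iptables -t nat -A OUTPUT -o lo -d 127.0.0.1 -p tcp --dport 4242 -j DNAT --to-destination 11.22.33.44:5353
        # give the rewritten packets a routable source address
        iptables -t nat -A POSTROUTING -d 11.22.33.44 -p tcp --dport 5353 -j MASQUERADE
        # allow 127/8 addresses to be routed off the loopback interface (kernel >= 3.6)
        sysctl -w net.ipv4.conf.all.route_localnet=1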

    Read the article

  • How to securely control access to a backend key server?

    - by andy
    I need to securely encrypt data in my database so that if the database is dumped, hackers are unable to decrypt the data. I'm planning on creating a simple key server on a different machine, and allowing the DB server access to it (restricted by IP address on the key server to permit the DB server). The key server would contain the key required to encrypt/decrypt data. However, if a hacker were able to get a shell on the DB server, they could request the key from the key server and therefore decrypt the data in the database. How could I prevent this (assuming all firewalls are in place, DB is not connected directly to the internet, etc)? i.e. is there some method I could use that could secure a request from the DB server to the key server so that even if a hacker had a shell on the DB server they'd be unable to make those same requests? Signed requests from the DB server could make issuing these requests less trivial - I suppose that'd help increase the amount of time it'd take to compromise the key server, something a hacker probably wouldn't have much of. As far as I can see, if someone can get a shell on the DB server everything's lost anyway. This could be mitigated by using one key per data item in the DB so at least there's not a single "master" key, but multiple keys that the hacker would need to access. What would be a secure method of ensuring requests from the DB server to the key server were authentic and could be trusted?
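
    To illustrate the signed-request idea mentioned above, a minimal sketch with openssl and curl (the endpoint, header name, and shared secret are hypothetical; in practice mutual TLS or per-request nonces would be layered on top):

        #!/bin/bash
        SECRET="shared-hmac-secret"            # provisioned on both the DB server and the key server
        BODY='{"key_id":"customer-data","ts":"2014-01-01T00:00:00Z"}'
        # HMAC-SHA256 over the request body; the key server recomputes it and rejects mismatches or stale timestamps
        SIG=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" -hex | awk '{print $NF}')
        curl -s -H "X-Signature: $SIG" -d "$BODY" https://keyserver.internal/api/get-key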

    Read the article

  • Virtual folder for multiple sites

    - by Cups
    I am creating a very simple flat-file CMS for small (multilingual) websites. The little file-writing that goes on is handled by 4 scripts in a publicly available folder in each site named /edit. Given that I now have 2 websites working on that simple system:

        websiteA/index.php (etc.)
        websiteA/edit/
        websiteB/index.php (etc.)
        websiteB/edit/

    what is the best way of making that /edit folder "virtual", so that these and each subsequent website owner can log in to their own view of /edit and yet the code only exists in one place? I do not want the website owners to have to log in from a central website, but from their own /edit directory. I have already read about different solutions, seemingly using the <Directory> directive in my httpd.conf declaration for each website, and also using straight mod_rewrite, but I admit to now becoming confused about some of the terminology. Each website has its own config file which contains path settings and so on. What in your opinion is the best way to handle this?
    EDIT: In light of a reply, I suppose that given a virtual host directive such as this:

        <VirtualHost 00.00.00.00:80>
            DocumentRoot /var/www/html/websitea.com
            ServerName www.websitea.com
            ServerAlias websitea.com
            DirectoryIndex index.htm index.php
            CustomLog logs/websitea combined
        </VirtualHost>

    is it possible to create an alias inside that directive for the folder websitea.com/edit?
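
    One possible sketch of the single-shared-copy idea, using mod_alias (paths are placeholders; a server-wide Alias in a conf.d include applies to every vhost, which is the effect asked about):

        # shared copy of the editor scripts, outside any one site's docroot
        sudo mkdir -p /var/www/shared/edit
        # map /edit in every vhost onto the shared copy
        echo 'Alias /edit /var/www/shared/edit' | sudo tee /etc/httpd/conf.d/edit-alias.conf
        # each site's own config file (mentioned above) can still tell the scripts which site is being edited
        sudo apachectl configtest && sudo apachectl graceful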

    Read the article
