Search Results

Search found 34489 results on 1380 pages for 'folder view'.


  • How can we configure the Bitnami Joomla stack to open a socket on startup?

    - by bobo
    I have deployed the Bitnami Ubuntu Joomla! 3.1.5-2 (64-bit) stack on Amazon Cloud: http://bitnami.com/stack/joomla/cloud/amazon

    By default, the stack is configured to run PHP using PHP-FPM. I have no problem getting Joomla and phpMyAdmin running as virtual hosts on Apache, but now I would like to add another virtual host. The problem I am having is that I have no idea how to get the system to create a socket on startup in the following folder:

        bitnami@ip-172-31-15-99:/opt/bitnami/php/var/run$ ls -al
        total 12
        drwxr-xr-x 2 root root 4096 Nov 3 20:43 .
        drwxr-xr-x 4 root root 4096 Oct 9 15:39 ..
        srw-rw-rw- 1 root root    0 Nov 3 20:43 joomla.sock
        -rw-r--r-- 1 root root    4 Nov 3 20:43 php5-fpm.pid
        srw-rw-rw- 1 root root    0 Nov 3 20:43 phpmyadmin.sock
        srw-rw-rw- 1 root root    0 Nov 3 20:43 www.sock

    I have the following /opt/bitnami/apps/mywebsite/conf/php-fpm/pool.conf file:

        [mywebsite]
        listen=/opt/bitnami/php/var/run/mywebsite.sock
        include=/opt/bitnami/php/etc/common-dynamic.conf
        include=/opt/bitnami/apps/mywebsite/conf/php-fpm/php-settings.conf
        pm=dynamic

    As can be seen, listen points to mywebsite.sock, which does not currently exist. As an experiment I removed the .sock files in the /opt/bitnami/php/var/run folder, and they came back on reboot. So how can we configure the stack to open a socket for mywebsite on startup?
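
    A minimal sketch of one way to wire this up, assuming the stack's main php-fpm.conf pulls in each app's pool.conf via include directives (the exact paths and the "php-fpm" service name are assumptions, not verified against this stack version):

        # find where the existing pools (joomla, phpmyadmin) are wired in
        grep -R "pool.conf" /opt/bitnami/php/etc/php-fpm.conf /opt/bitnami/apps/*/conf/ 2>/dev/null

        # add the new app's pool the same way, e.g. a line like
        #   include=/opt/bitnami/apps/mywebsite/conf/php-fpm/pool.conf
        # in /opt/bitnami/php/etc/php-fpm.conf, then restart PHP-FPM so it creates the socket
        sudo /opt/bitnami/ctlscript.sh restart php-fpm

        # the socket should now exist
        ls -l /opt/bitnami/php/var/run/mywebsite.sock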

  • vhost.conf file in PLESK not working as intended

    - by Saif Bechan
    I have configured a vhost file for my domain, but it does not seem to work. These are the steps I took; please correct me if I am wrong.

    First I made a file called vhost.conf in /var/www/vhosts/*domain*/conf/vhost.conf. The content of the vhost file looks like this:

        <Directory /var/www/vhosts/*domain*/httpdocs>
            php_admin_flag engine on
            php_admin_flag display_errors on
        </Directory>

    In my /etc/php.ini I set display_errors=Off. After everything I rebuild with:

        /usr/local/psa/admin/sbin/websrvmng -a

    But I don't see any errors on my page. Only when I turn on display_errors in /etc/php.ini can I see the errors. I know for a fact that the vhost file is read, because when I type nonsense values into it I get an error when restarting Apache saying there are errors in the vhost file.

    Does anyone know what the problem could be? Should there be special settings in either the php.ini file or the httpd.conf file? The httpd.conf I edit is /etc/httpd/conf/httpd.conf. Is this the file that Plesk uses, or is there another? The values I see there do not really reflect the http folders of my domain. The httpd file looks like this now:

        # The document root
        DocumentRoot "/var/www/html"

        # I guess this is the base directory
        <Directory />
            Order Deny,Allow
            Deny from all
            Options None
            AllowOverride None
        </Directory>

        # And I guess here are all my domains located, but there aren't any here
        <Directory "/var/www/html">
            Options None
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    Only, this directory /var/www/html is not used by me; I use the directory /var/www/vhosts. The only folder found in /var/www/html is a folder called awstats. Does Plesk use other files, and where are they located? I hope this all makes sense to someone, and I hope I can find a solution.

  • Sync OneNote Notebooks to/on SkyDrive

    - by Sam
    I've got OneNote running on all the computers in our house, and several people use it all the time across those machines. The only drawback: I want to keep the copies of OneNote in sync without having to run a dedicated server myself. Right now one of my computers has a folder share that all the others sync to, but this is highly impractical since that computer is not always running.

    So my question is: is it possible to put the notebook files in a (private) SkyDrive folder and have all the computers sync there? This way all computers could stay in sync whenever they get access to the web. Can this be done? And, of course, how?

    [Update] Maybe I should not have taken knowledge about OneNote for granted: OneNote uses a proprietary file format, but has very good in-file syncing that works on network shares. A generic "just sync the complete file" approach won't be useful at all, because I'd just get "file has changed on server and on client" conflicts all the time. The sync needs to understand OneNote files and be able to sync the content - i.e. OneNote itself needs to sync the files, not some generic sync tool.

  • Why can't “knife data bag from file” find existing json file on chef server?

    - by ellisera
    Summary: I'm running into a problem with "knife data bag from file", where knife doesn't recognize the .json data bag file pulled down from a remote git repo.

    Background: I'm currently trying to transition from chef-solo use to Chef Server while using the cookbooks, data bags and other Chef info from our remote git repo. I've pulled down a copy of our git repo and set the cookbook path and data bag path in knife.rb. I also loaded the cookbooks, made adjustments, etc.

    Details: When I try to load our .json data bags by doing "knife data bag from file FOLDER FILE", it looks like it worked - until I do "knife data bag list" and it comes up blank. So I decided to try adding the edit option at the end to see what's being loaded, if anything. This is the error I get:

        knife data bag from file local_settings test.json -e nano
        ERROR: Could not find or open file 'test.json' in current directory or in 'data_bags/local_settings/test.json'

    The data bag file does exist, in the proper location, and it is a tested, working JSON file. I've also sometimes gotten an error saying "could not open data bag 'local_settings'". I would obviously like to keep the data bag path within the appropriate git repo folder so I can keep track of changes in a more centralized location (our git repo, as opposed to the Chef server). Any solutions, advice or pointers in the right direction are appreciated.
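
    A quick sketch of the sequence that usually works, assuming a standard chef-repo layout with a data_bags/ directory (the bag and file names come from the question; the repo path and the item id "test" are assumptions):

        cd /path/to/chef-repo                     # run knife from the repo root so data_bags/ resolves
        knife data bag create local_settings      # the bag must exist on the server first
        knife data bag from file local_settings data_bags/local_settings/test.json
        knife data bag list                       # should now show local_settings
        knife data bag show local_settings test   # verify the item uploaded (id assumed to be "test")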

  • How do I install fonts on Windows Web Server 2008 R2?

    - by Eric Brearley
    I would like to install Arial on our web servers. I should add that this is because we generate reports server-side and make them available in a number of downloadable formats (Excel, PDF etc.), hence the need to have the fonts installed on the server.

    I have console access to our web farm, and on the server I've copied the .ttf files into a c:\fonts folder. Then I run the following VBScript on the server:

        ' VBScript to install fonts on blade servers
        ' Arial font family
        Set objShell = CreateObject("Shell.Application")
        Set objFolder = objShell.Namespace("c:\fonts")
        Set objFolderItem = objFolder.ParseName("arial.ttf")
        objFolderItem.InvokeVerb("Install")

        Set objShell = CreateObject("Shell.Application")
        Set objFolder = objShell.Namespace("c:\fonts")
        Set objFolderItem = objFolder.ParseName("arialbd.ttf")
        objFolderItem.InvokeVerb("Install")

        Set objShell = CreateObject("Shell.Application")
        Set objFolder = objShell.Namespace("c:\fonts")
        Set objFolderItem = objFolder.ParseName("arialbi.ttf")
        objFolderItem.InvokeVerb("Install")

        Set objShell = CreateObject("Shell.Application")
        Set objFolder = objShell.Namespace("c:\fonts")
        Set objFolderItem = objFolder.ParseName("ariali.ttf")
        objFolderItem.InvokeVerb("Install")

        Set objShell = CreateObject("Shell.Application")
        Set objFolder = objShell.Namespace("c:\fonts")
        Set objFolderItem = objFolder.ParseName("ariblk.ttf")
        objFolderItem.InvokeVerb("Install")

        msgbox "Fonts installed"

    I get the message box, but no font installation pop-ups like I do when I run this script on my desktop. The fonts do not get installed: they do not show up in the font selection dialogue in Notepad (on the web server) and we get the ASP.NET exception "Font 'Arial' cannot be found." I have also restarted the server, and I have also tried copying the .ttf files to the c:\windows\fonts folder and restarting the server. What do I need to do to install fonts on Windows Web Server 2008 R2?
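
    One approach often suggested for headless servers is to copy the files into %windir%\Fonts and register them directly in the registry, rather than relying on the shell "Install" verb; a sketch from an elevated command prompt (the value names follow the usual "Family Style (TrueType)" convention and are assumptions for the non-regular weights):

        copy c:\fonts\arial.ttf %windir%\Fonts\
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts" /v "Arial (TrueType)" /t REG_SZ /d arial.ttf /f
        copy c:\fonts\arialbd.ttf %windir%\Fonts\
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts" /v "Arial Bold (TrueType)" /t REG_SZ /d arialbd.ttf /f
        rem ...repeat for the italic, bold italic and black variants, then reboot or recycle the app pool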

  • IMAP proxy as a POP3 hub?

    - by mailman stan
    Simple scenario, complicated technology: one family receiving mail from five email addresses via POP3 into one Outlook inbox on a single PC. Now we'd like to be able to replicate that single inbox across multiple devices (e.g. desktop PC, laptop, netbook, smartphone). If we continue using POP3 as the mail transfer protocol, messages will be downloaded to one device and will not be visible to the others; replies will likewise be isolated on the sending machine.

    If we switch to IMAP, I understand that we can have multiple devices maintaining a shared view of an inbox hosted at the server end, but what about multiple accounts? I tried changing the account configuration in Outlook to fetch from the mail providers' IMAP service instead of POP3, which does give a shared view across multiple devices but also causes Outlook to create a separate inbox and PST for each account. This is awkward because it means there are five separate folders that need to be checked, and Outlook tools like search filters and rules don't seem to work across accounts.

    To get what I want (five accounts delivered into one shared mailbox) it seems that I would need some sort of intervening server that collects mail (using POP3) from all our accounts into a single inbox while preserving the original destination addresses, and then serves it up to all our devices using IMAP. Is this workable? Is it a good approach? Is there an easier way?
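
    One common way to build exactly that kind of hub is a small always-on box running an IMAP server (e.g. Dovecot) plus fetchmail pulling each POP3 account into the one shared mailbox; a rough sketch of the fetchmail side, with hosts, users and passwords as placeholders:

        # ~/.fetchmailrc for the local "family" user - fetchmail hands mail to the local MTA,
        # which files everything into the single mailbox the IMAP server then shares out
        set daemon 300
        poll pop.provider1.example proto pop3
          user "family.one@provider1.example" pass "secret1" is "family" here ssl
        poll pop.provider2.example proto pop3
          user "family.two@provider2.example" pass "secret2" is "family" here ssl
        # ...one poll block per POP3 account; the original To: headers stay in the messages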

  • File upload permissions issue on Windows Server 2008 R2 IIS 7.5 PHP 5.3 with Drupal v.7.26

    - by Taras
    I have a website on Drupal version 7.26. The OS on the server is Windows Server 2008 R2.

        Web server ($_SERVER["SERVER_SOFTWARE"]): Microsoft-IIS/7.5
        Server API: CGI/FastCGI
        Core PHP version: 5.3.28
        file_uploads: On
        post_max_size: 75M
        upload_max_filesize: 50M
        upload_tmp_dir: C:\inetpub\wwwroot\tmp
        memory_limit: 128M
        open_basedir: C:\inetpub\wwwroot;C:\inetpub\wwwroot\tmp

    When I go to /admin/config/media/file-system I see these error messages:

        The directory sites\default\files exists but is not writable and could not be made writable.
        The directory tmp exists but is not writable and could not be made writable.

    Public file system path: sites\default\files. Temporary directory: tmp.

    I have set permissions on the folders:

        C:\inetpub\wwwroot\tmp                 : IIS_IUSRS : Full control
        C:\inetpub\wwwroot\sites\default\files : IIS_IUSRS : Full control

    I am working as the Administrator user (echo %username% returns Administrator). I can't change the Read-only attribute for these folders. Every time I make this change, press Apply with "Apply changes to this folder, subfolders and files" checked, and press OK, it displays the "Applying attributes..." dialog; when it finishes I press OK and close the folder properties dialog. When I open the Properties dialog again, Read-only is checked again. How can I fix it?
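
    For what it's worth, the Read-only checkbox on a folder's Properties dialog only applies to the files inside it and re-appears for folders by design, so it is usually a red herring; what matters is the NTFS ACL seen by the worker process. A hedged sketch of checking and granting that from an elevated prompt (paths from the question; the Modify grant is an assumption about what Drupal needs):

        icacls "C:\inetpub\wwwroot\sites\default\files"
        icacls "C:\inetpub\wwwroot\sites\default\files" /grant "IIS_IUSRS:(OI)(CI)M" /T
        icacls "C:\inetpub\wwwroot\tmp" /grant "IIS_IUSRS:(OI)(CI)M" /T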

  • How to generate customized sudoers files in puppet depending on the environment they're deployed to?

    - by gozu
    The sysadmins are present in the sudoers files of all environments, but other sudoers are not. Different environments all have slightly different sudoers. Most of the time 90% of users are the same and 10% vary, so we cannot have only one sudoers file for everything.

    Right now we are using Puppet with 10 different files, with names like sudoers.production1, sudoers.production2, sudoers.production3, sudoers.testing1, sudoers.staging1 and so forth. Puppet then picks the file to deploy based on the server's $domain (e.g. dbserver.staging1.acme.com) or $hardwaremodel. It works fine, but it's a nightmare to maintain so many files. I'd like to autogenerate sudoers files based on the server's domain and have only one big file with all the sudoers permissions for all users and all environments. Something that looks like:

        User_Alias ADMINS = abe, bob, carol, dave

        case $domain {
          "staging1.acme.com" { # add dev1, dev2, tester1, tester2 to the sudoers file }
          "testing2.acme.com" { # add tester1, tester3, tester4 to the sudoers file }
        }

    What's the best way to go about this? Suggestions for alternatives are welcome. I'd appreciate any tips.

    Update 1: For security reasons, we'd rather not concatenate a bunch of files from a folder located on a Puppet client, in case someone puts a file in there (maliciously or not) and either breaks the combined file or inserts something into it. Most importantly, for usability, we'd like to keep the number of sudoers-related files (fragment or complete) on the Puppet server to either 3 (prod/stage/test) or preferably 1 file. This file would (somehow) generate sudoers files on the Puppet server and send one customized file to each Puppet client. The purpose of this is only that searching for a username in a single file, and removing it, is quicker than doing it in 11 files. When adding a user to a bunch of environments it won't be as quick, but only one file would need to be opened and looked at, greatly reducing the chances of an omission. Our sudo version is 1.6.9p8, so we can't use a sudoers.d folder, only a single sudoers file.
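
    A sketch of how this could look with a single ERB template driven by the $domain fact (module name, file paths and the per-environment user lists are illustrative, not a tested manifest):

        # modules/sudo/manifests/init.pp
        file { '/etc/sudoers':
          owner   => 'root',
          group   => 'root',
          mode    => '0440',
          content => template('sudo/sudoers.erb'),
        }

        # modules/sudo/templates/sudoers.erb
        User_Alias  ADMINS = abe, bob, carol, dave
        ADMINS      ALL=(ALL) ALL
        <% if domain == "staging1.acme.com" -%>
        User_Alias  STAGE1 = dev1, dev2, tester1, tester2
        STAGE1      ALL=(ALL) ALL
        <% elsif domain == "testing2.acme.com" -%>
        User_Alias  TEST2 = tester1, tester3, tester4
        TEST2       ALL=(ALL) ALL
        <% end -%>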

  • SQL Server transaction log backups

    - by krimerd
    Hi there,

    I have a question regarding transaction log backups in SQL Server 2008. I am currently taking full backups once a week (Sunday) and transaction log backups daily. I put the full backup in folder1 on Sunday, and then on Monday I also put the first transaction log backup in the same folder. On Tuesday, before I take the second transaction log backup, I move the first one from folder1 into folder2, then take the second transaction log backup and put it in folder1. Same thing on Wednesday, Thursday and so on. Basically, folder1 always holds the latest full backup and the latest transaction log backup, while the older transaction log backups are in folder2.

    My question is: when SQL Server is about to take, let's say, the fourth (Thursday) transaction log backup, does it look for the previous transaction log backups (first, second and third) so that the new backup only includes the transactions since the last backup, or does it have some other way of knowing whether there are other transaction log backups? I am asking because all my transaction log backups seem to be about the same size, and I thought their size would depend on the amount of transactions since the last transaction log backup.

    Example: say you take a full backup, then a transaction log backup that comes out at 200 MB, and then immediately take another transaction log backup. That second log backup should be considerably smaller than the first, because no (or almost no) transactions occurred between the two backups, right? At least, that's what I've been assuming. What happens in my case is that the second backup is pretty much the same size as the first, and I am wondering whether the reason is that I moved the first transaction log backup to a different folder, so SQL Server now thinks all I have is a full backup and therefore puts all the transactions since the full backup into the second transaction log backup. Can anyone please explain if my assumptions are right? Thanks...
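
    For context when digging into this: the log chain is tracked inside the database itself (by LSN), not by scanning the backup files left in a folder, so one way to see what each log backup actually covered is to query the backup history in msdb. A sketch, with the database name as a placeholder:

        SELECT backup_start_date, first_lsn, last_lsn,
               backup_size / 1048576.0 AS backup_size_mb
        FROM   msdb.dbo.backupset
        WHERE  database_name = 'MyDatabase'
               AND type = 'L'              -- 'L' = transaction log backup
        ORDER BY backup_start_date DESC;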

  • MSI Installer error 2203; how to force permissions on installer directory?

    - by goober
    [Cross-posted on StackOverflow.com as well because the question relates to development. Feel free to let me know where it best belongs.]

    Hi all, I'll try to bullet-point to keep it short:

    Background / Issue
    - Trying to install ASP.NET MVC 3 RC on my Windows 7 machine.
    - Uninstalled other versions of MVC (2 and 3 Beta 1).
    - Ran the installer -- got a generic error, 2203.
    - Log files said that it was a permissions error on C:\Windows\Installer.
    - Checked C:\Windows\Installer -- sure enough, it's marked as read-only.
    - I un-checked "Read-Only" in the folder properties and applied. It appears to open the dialog and apply to all files. However, when clicking properties again, the read-only box is back to checked.
    - Checked the security tab of the folder -- both SYSTEM and the Administrators group have full access.
    - Checked ownership -- the Administrators group is listed as an owner.
    - Verified that I'm in the system as an Administrator (in fact, the only account in the Administrators group besides Administrator).

    So, what gives? Thanks in advance for any help you can provide!

  • Preventing access to files if a user types the full url on the address bar

    - by bogha
    I have a website, and some folders on the website contain images and files like .pdf, .doc and .docx. A user can easily just type the address in the URL bar to get the file or display the photo, e.g.:

        http://site/folder1/img/pic1.jpg

    then boom - he can see the image or just download the file. My question is: how do I prevent this kind of action? How can I guarantee secure access to the files? Any suggestions?

    UPDATE TO CLARIFY MY IDEA: I don't want any user who is browsing the website to get access to these files simply by typing the URL of the file. Those files are CV files; they are uploaded by users to a specific folder on the server, which we host outside the company. Those files are only viewed by the HR people through a special system. That's the scenario we want. I don't want a web geek who just wants to see what files have been uploaded to this folder to easily download them to his/her computer and view them or publish them on the internet. I hope you got my idea.
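
    The usual pattern is to keep the uploads outside the web root (or deny direct web access to the folder) and have the HR application stream the file only after checking the user's permissions. If the site happens to run on Apache, a minimal sketch of the deny side is a .htaccess file in the upload folder (Apache 2.2 syntax assumed, and AllowOverride must permit it):

        # .htaccess in the CV upload folder: refuse all direct requests
        Order allow,deny
        Deny from all

    The authorised HR system then reads the file from disk server-side and sends it to the browser itself, so the file never has a public URL.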

  • Do I have to do anything for file ACLs to work both before and after I reformat a computer?

    - by Zian Choy
    Let's say that I have a computer running Windows Vista with two users: Alice and Bob. Alice is the admin and Bob is a normal user. They each have files in their respective My Documents folders; Bob is not allowed to view Alice's files, and Alice has to go through a UAC elevation to view Bob's.

    Alice copies all the files on the computer to an external NTFS-formatted hard drive with the following two commands:

        robocopy "E:\Bob's Files" "C:\Users\Bob\My Documents" /MIR
        robocopy "E:\Alice's Files" "C:\Users\Alice\My Documents" /MIR

    She then reformats the hard drive, installs a fresh copy of Windows, and creates two users named Alice and Bob on the computer. Will everything in the first paragraph still be true after Alice copies the files back onto the internal hard drive? Assume that when the files are copied back over, she logs in as Bob to copy Bob's files and likewise with her own files.

    Possibly relevant: Alice and Bob also have passwords on their user accounts, and they create new passwords after the computer is reformatted.

    The main post has been tweaked slightly to make the question clearer. Answers that predate April 2011 are referring to an earlier version of this post.

  • Clone virtual machine with Server 2008 R2 and Hyper-V?

    - by bwerks
    Hi all,

    I've recently started working with Hyper-V, and so far it's quite nice. However, I've been running into problems with what seems like it should be the most basic of workflows.

    I've set up a baseline Server 2008 R2 configuration and exported it with the intention of using the export for cloning. I entered "C:\Exports\" as the export folder. However, I run into problems when I try to import the image. From the Hyper-V manager, I select "Import Virtual Machine" and in the resulting window I entered "C:\Exports\BuildServer\" as the folder, set the radio button to "Copy the virtual machine (create a new unique ID)" and checked the checkbox for "Duplicate all files so the same virtual machine can be imported again." Doing so results in the following error:

        Import failed. Import task failed to copy file from 'H:\Exports\BuildServer\Virtual Hard Disks\BuildServer.vhd' to 'C:\Hyper-V\Virtual Hard Disks\BuildServer.vhd': The file exists. (0x80070050)

    Have I somehow messed something up in configuration, or is this a known thing? I've read that it should be possible to clone VMs by copying them in the filesystem, but I'd prefer to keep things in the management UI if possible.

  • Using GitOAuthPlugin for Jenkins - not working as expected

    - by Blundell
    I need some clarity and maybe a fix. I'm using this plugin to authorise who views our Jenkins CI server: https://wiki.jenkins-ci.org/display/JENKINS/Github+OAuth+Plugin

    As I understand it, anyone who is auth'd to view one of our GitHub projects can also log in to our Jenkins box. This works. I thought it would also restrict the person logging in to viewing only the projects that they have GitHub permission on. For instance: three projects on GitHub (A, B, C), three builds on Jenkins. User 1 has Git access to all 3 projects (A, B, C); User 2 has Git access to only 1 project (A). When logging into Jenkins:

        User 1 can see all 3 projects (this works)
        User 2 should only see project A

    The problem is that User 2 can also see all 3 projects, when they should only see 1! Have I got this correct, and if so is this a bug?

    I have the settings set in the Jenkins configuration under Github Authorization Settings. There we have some admin users, one organization, and none of the 4 checkboxes ticked. (User 2 is not an admin and is not part of the org.) The plugin is open sourced here: https://github.com/mocleiri/github-oauth-plugin

    I was trying to get Jenkins to print the logs from the plugin, but I also failed at viewing those (to see if there was an issue). I followed these instructions: https://wiki.jenkins-ci.org/display/JENKINS/Logging

    It's the same concept as outlined below, but using GitHub rather than manually selecting users: https://wiki.jenkins-ci.org/display/JENKINS/2012/01/03/Allow+access+to+specific+projects+for+Users%28Assigning+security+for+projects+in+Jenkins%29

    Have I got this right or wrong? Is it possible to auth a Jenkins user to only see one project?

  • Wildcard DNS, VirtualHosts on apache2, 404 for unused subdomains

    - by niel
    On an Apache2 server linked to by a DNS zone that includes a wildcard entry, e.g. *.example.com, subdomains that are not defined as ServerNames in any VirtualHost point to the first defined VirtualHost - in my example this is 000-default.

    My question: how would one get unused subdomains (subdomains not used in any VirtualHost) to return a 404 error to the requesting client? This must preferably show up in the server logs as a 404 as well.

    I have looked into the following possibilities:

    1. Redirecting any invalid subdomain to the home page or some other page. The problem with this method is that when someone links to your site as this.company.sucks.example.com, the client will see your home page (or, in my case, 000-default if I do not redirect). Thanks to Mike for pointing this out. (A regex for "suck", etc. is definitely not an option.)
    2. Letting the default VirtualHost point to a non-existent directory. Apache does not like this one bit, warning with every reload. Beyond the warning, everything seems fine. This seems like a hack. Does this seem like a problem (however small) to anyone?
    3. Pointing the default VirtualHost to a folder where the index.php is forbidden, thus creating a 403 status code. This is confusing and makes things like the following overly complicated: say, for example, you use a subdomain per user (a big reason to use wildcard DNS, apparently), and users have the ability to view each other's profiles at username.example.com. This solution is confusing to the user and completely not what I want to do.

    My ideal solution will let the user know there is nothing to view at the URL he entered, preferably with a 404 and an error log entry for the address entered (not some other address). Any help would be greatly appreciated!
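
    One sketch of a catch-all vhost that answers with a clean 404 (mod_alias syntax; the DocumentRoot path and ServerName are placeholders, and the vhost must sort first so it stays the default):

        # 000-default: catches every Host: header that no other vhost claims
        <VirtualHost *:80>
            ServerName catchall.example.com
            DocumentRoot /var/www/empty
            # return 404 for every request; it is logged as a 404 in the access log
            Redirect 404 /
        </VirtualHost>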

  • Thunderbird doesn't show folders on a new Dovecot install

    - by Zoran Zaric
    Hey, I set up a new mail server with Postfix and Dovecot some days ago. Everything is working, except that Thunderbird does not show any folders; Evolution shows me all folders. I migrated from a Courier install using imapsync. In the filesystem the folders don't have INBOX in their names, so the folders are called ".Folder 1", not ".INBOX.Folder 1".

    This is the output of dovecot -n:

        # 1.0.10: /etc/dovecot/dovecot.conf
        Warning: mail_extra_groups setting was often used insecurely so it is now deprecated, use mail_access_groups or mail_privileged_group instead
        base_dir: /var/run/dovecot/
        log_timestamp: "%Y-%m-%d %H:%M:%S "
        protocols: imap pop3
        listen(default): *:143
        listen(imap): *:143
        listen(pop3): *:110
        disable_plaintext_auth: no
        login_dir: /var/run/dovecot//login
        login_executable(default): /usr/lib/dovecot/imap-login
        login_executable(imap): /usr/lib/dovecot/imap-login
        login_executable(pop3): /usr/lib/dovecot/pop3-login
        first_valid_uid: 1001
        last_valid_uid: 1001
        mail_extra_groups: vmail
        mail_access_groups: vmail
        mail_location: maildir:/var/vmail/%d/%u
        maildir_copy_with_hardlinks: yes
        mail_executable(default): /usr/lib/dovecot/imap
        mail_executable(imap): /usr/lib/dovecot/imap
        mail_executable(pop3): /usr/lib/dovecot/pop3
        mail_plugin_dir(default): /usr/lib/dovecot/modules/imap
        mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap
        mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3
        pop3_uidl_format(default):
        pop3_uidl_format(imap):
        pop3_uidl_format(pop3): %08Xu%08Xv
        auth default:
          user: nobody
          passdb:
            driver: sql
            args: /etc/dovecot/dovecot-sql.conf
          userdb:
            driver: sql
            args: /etc/dovecot/dovecot-sql.conf
          socket:
            type: listen
            client:
              path: /var/spool/postfix/private/auth
              mode: 432
              user: postfix
              group: postfix
            master:
              path: /var/run/dovecot/auth-master
              mode: 432
              user: vmail
              group: vmail

    Thanks!
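
    Two things worth checking after a Courier-to-Dovecot migration (both are guesses here, not confirmed fixes): Thunderbird only lists subscribed folders, so the IMAP subscriptions may need to be re-created (or "show only subscribed folders" unticked), and the Thunderbird account may still have an "IMAP server directory" of "INBOX." left over from Courier. Alternatively, Dovecot can be told to keep Courier's naming; a sketch for the 1.0.x config above:

        # dovecot.conf - mimic Courier's folder naming so old client settings keep working
        namespace private {
          separator = .
          prefix = INBOX.
          inbox = yes
        }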

  • Start Menu Shortcuts resisting reorganisation

    - by seusr
    Running Win7. The Start Menu was getting too big, so I've navigated to C:\ProgramData\Microsoft\Windows\Start Menu\Programs and created some sub-folders for various categories [image editors, system utilities, media players, security, for example] in which to group the shortcuts. Having created the group folders, I've dragged and dropped the Start Menu shortcuts into them.

    However, it seems not all Start Menu shortcuts are alike. Some programs, once their shortcut folders have been moved from the Start Menu, spawn new [but empty] folders bearing their company names in the top level of the Start Menu, e.g. AVICodec. Some programs don't have Start Menu shortcut folders that can be seen in Windows Explorer [or equivalent] at all, even though they turn up under the Start Menu itself. It seems these can be moved where they are visible, i.e. in Start - All Programs, by dragging and dropping not just onto the desired folder but by hovering to expand the desired folder and then dropping between other shortcuts there, e.g. Daum, Format Factory.

    That's not such a problem, but why it's happening is a mystery. What's going on, and how do I make these shortcuts behave normally or otherwise control them properly?

  • Installer not being updated (probably because of Windows 7 file cache)

    - by Sithu Kyaw
    I'm creating an installer for my Visual FoxPro application using ISTool and Inno Setup. It worked fine for me the first time. But then I updated my code and re-built the EXE file, and compiled the installer again. I found that my update was not compiled into the installer, and I did not see the update in my running application. I noticed that the EXE file built by VFP was updated properly; it seems the installation script did not output the updated file.

    But when I changed the folder names, it did work. I don't want to change folder names whenever I run that installation script - that is not a good solution, actually. I think it is because of the Windows 7 cache system. Mine is Windows 7 Home Premium Service Pack 1.

    For example, my previous output file is located at C:\path\to\myinstaller.exe. When I compile the installation script, the output file there should be overwritten, but it was not, as expected. Even when I deleted the file, it did not work. When I changed the output file path to C:\newpath\to\myinstaller.exe, I got the fix, but that is not the solution I'm looking for. Does anyone know how to do this?

    [Edit] I found that the installed directory was not updated properly. For example, I installed the program to C:\Program Files\MyInstalledApp. When I run the installer again, that installation directory should be overwritten, but it fails. Thus, I have to uninstall the app before I re-install it. Is there any fix for this?
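
    One thing commonly behind "the new EXE doesn't land in {app}" with VFP builds is that the EXE's version resource doesn't change between builds, so Setup skips the copy; a hedged sketch of the [Files] entry that forces the replacement (the source path is a placeholder):

        [Files]
        ; ignoreversion: replace the target even when the version resource is unchanged
        Source: "C:\path\to\build\myapp.exe"; DestDir: "{app}"; Flags: ignoreversion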

  • Can't install NPM after installing Node on EC2 Linux instance?

    - by frequent
    I'm making my first attempt at getting a Node server set up on an Amazon EC2 Linux instance. I think I made it quite far. The first problem I ran into was that, when trying to make Node, the connection timed out after a while, so I needed three attempts until I got this:

        LINK(target) /home/ec2-user/node/out/Release/node: Finished
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_header.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_provider.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_ustack.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_etw.stamp
        make[1]: Leaving directory `/home/ec2-user/node/out'
        ln -fs out/Release/node node

    This tells me "Node is done", although I'm not sure it is also working as it should. Following this, this and this tutorial, I'm now stuck at installing npm. I think I first cloned into the wrong folder, which always gave me error 127, but even if I do this:

        cd ~
        git clone git://github.com/isaacs/npm.git
        cd npm
        sudo -s PATH=/usr/local/bin:$PATH make install

    I'm still getting this:

        # after cloning #
        make[1]: Entering directory `/root/npm'
        node cli.js install
        bash: node: command not found
        make[1]: *** [node_modules/.bin/ronn] Error 127
        make[1]: Leaving directory `/root/npm'
        make: *** [man/man3/start.3] Error 2

    Question: Since I'm pretty much a newbie at everything I'm trying here, can someone please tell me what I'm doing wrong and how to get npm to install? Also, in case I cloned into the wrong folder, is there a way to remove the "false clone", or is this not written to disk until I call make install, so I don't need to worry? Thanks for helping out!
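
    For what it's worth, the "bash: node: command not found" line suggests the freshly built node binary simply isn't on the PATH that make sees; a sketch of the order that usually sorts this out (paths taken from the output above, otherwise assumptions):

        cd ~/node
        sudo make install      # installs the node binary into /usr/local/bin
        node -v                # confirm node is now found
        cd ~/npm
        sudo make install      # npm's Makefile can now run "node cli.js install"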

  • Is there a tool that can test what SSL/TLS cipher suites a particular website offers?

    - by Jeremy Powell
    Is there a tool that can test what SSL/TLS cipher suites a particular website offers? I've tried openssl, but if you examine the output:

        $ echo -n | openssl s_client -connect www.google.com:443
        CONNECTED(00000003)
        depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
        verify error:num=20:unable to get local issuer certificate
        verify return:0
        ---
        Certificate chain
         0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
           i:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
         1 s:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
           i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM
        MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg
        THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x
        MTEyMTgyMzU5NTlaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh
        MRYwFAYDVQQHFA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKFApHb29nbGUgSW5jMRcw
        FQYDVQQDFA53d3cuZ29vZ2xlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC
        gYEA6PmGD5D6htffvXImttdEAoN4c9kCKO+IRTn7EOh8rqk41XXGOOsKFQebg+jN
        gtXj9xVoRaELGYW84u+E593y17iYwqG7tcFR39SDAqc9BkJb4SLD3muFXxzW2k6L
        05vuuWciKh0R73mkszeK9P4Y/bz5RiNQl/Os/CRGK1w7t0UCAwEAAaOB5zCB5DAM
        BgNVHRMBAf8EAjAAMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwudGhhd3Rl
        LmNvbS9UaGF3dGVTR0NDQS5jcmwwKAYDVR0lBCEwHwYIKwYBBQUHAwEGCCsGAQUF
        BwMCBglghkgBhvhCBAEwcgYIKwYBBQUHAQEEZjBkMCIGCCsGAQUFBzABhhZodHRw
        Oi8vb2NzcC50aGF3dGUuY29tMD4GCCsGAQUFBzAChjJodHRwOi8vd3d3LnRoYXd0
        ZS5jb20vcmVwb3NpdG9yeS9UaGF3dGVfU0dDX0NBLmNydDANBgkqhkiG9w0BAQUF
        AAOBgQCfQ89bxFApsb/isJr/aiEdLRLDLE5a+RLizrmCUi3nHX4adpaQedEkUjh5
        u2ONgJd8IyAPkU0Wueru9G2Jysa9zCRo1kNbzipYvzwY4OA8Ys+WAi0oR1A04Se6
        z5nRUP8pJcA2NhUzUnC+MY+f6H/nEQyNv4SgQhqAibAxWEEHXw==
        -----END CERTIFICATE-----
        subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
        issuer=/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 1777 bytes and written 316 bytes
        ---
        New, TLSv1/SSLv3, Cipher is AES256-SHA
        Server public key is 1024 bit
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : AES256-SHA
            Session-ID: 748E2B5FEFF9EA065DA2F04A06FBF456502F3E64DF1B4FF054F54817C473270C
            Session-ID-ctx:
            Master-Key: C4284AE7D76421F782A822B3780FA9677A726A25E1258160CA30D346D65C5F4049DA3D10A41F3FA4816DD9606197FAE5
            Key-Arg   : None
            Start Time: 1266259321
            Timeout   : 300 (sec)
            Verify return code: 20 (unable to get local issuer certificate)
        ---

    it just shows that the negotiated cipher suite is AES256-SHA. I know I could grep through the hex dump of the conversation, but I was hoping for something a little more elegant. I would prefer Linux tools, but Windows (or other) would be fine. This question is motivated by the security testing I do for PCI compliance and general penetration testing.

    Update: GregS points out below that the SSL server picks from the cipher suites offered by the client. So it seems I would need to test all cipher suites one at a time. I think I can hack something together, but is there a tool that does specifically this?
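
    In the meantime, a rough brute-force sketch along the lines of the update above: offer each cipher the local OpenSSL build knows, one at a time, and see whether the handshake completes (the host is the example from the question; note a suite can only be detected if the local OpenSSL supports it too):

        #!/usr/bin/env bash
        HOST="www.google.com:443"
        for cipher in $(openssl ciphers 'ALL:eNULL' | tr ':' ' '); do
          if echo -n | openssl s_client -cipher "$cipher" -connect "$HOST" >/dev/null 2>&1; then
            echo "accepted: $cipher"
          else
            echo "rejected: $cipher"
          fi
        done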

  • email output of powershell script

    - by Gordon Carlisle
    I found this wonderful script that outputs the status of the current DFS backlog to the PowerShell console. This works great, but I need the script to email me so I can schedule it to run nightly. I have tried using the Send-MailMessage command, but can't get it to work, mainly because my PowerShell skills are very weak. I believe most of the issues revolve around the script using the Write-Host command. While the coloring is nice, I would much rather have it email me the results. I also need the solution to be able to specify a mail server, since the DFS servers don't have email capability. Any help or tips are welcome and appreciated. Here is the code:

        $RGroups = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query "SELECT * FROM DfsrReplicationGroupConfig"
        $ComputerName = $env:ComputerName
        $Succ = 0
        $Warn = 0
        $Err = 0
        foreach ($Group in $RGroups) {
            $RGFoldersWMIQ = "SELECT * FROM DfsrReplicatedFolderConfig WHERE ReplicationGroupGUID='" + $Group.ReplicationGroupGUID + "'"
            $RGFolders = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query $RGFoldersWMIQ
            $RGConnectionsWMIQ = "SELECT * FROM DfsrConnectionConfig WHERE ReplicationGroupGUID='" + $Group.ReplicationGroupGUID + "'"
            $RGConnections = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query $RGConnectionsWMIQ
            foreach ($Connection in $RGConnections) {
                $ConnectionName = $Connection.PartnerName.Trim()
                if ($Connection.Enabled -eq $True) {
                    if (((New-Object System.Net.NetworkInformation.ping).send("$ConnectionName")).Status -eq "Success") {
                        foreach ($Folder in $RGFolders) {
                            $RGName = $Group.ReplicationGroupName
                            $RFName = $Folder.ReplicatedFolderName
                            if ($Connection.Inbound -eq $True) {
                                $SendingMember = $ConnectionName
                                $ReceivingMember = $ComputerName
                                $Direction = "inbound"
                            } else {
                                $SendingMember = $ComputerName
                                $ReceivingMember = $ConnectionName
                                $Direction = "outbound"
                            }
                            $BLCommand = "dfsrdiag Backlog /RGName:'" + $RGName + "' /RFName:'" + $RFName + "' /SendingMember:" + $SendingMember + " /ReceivingMember:" + $ReceivingMember
                            $Backlog = Invoke-Expression -Command $BLCommand
                            $BackLogFilecount = 0
                            foreach ($item in $Backlog) {
                                if ($item -ilike "*Backlog File count*") {
                                    $BacklogFileCount = [int]$Item.Split(":")[1].Trim()
                                }
                            }
                            if ($BacklogFileCount -eq 0) {
                                $Color = "white"
                                $Succ = $Succ + 1
                            } elseif ($BacklogFilecount -lt 10) {
                                $Color = "yellow"
                                $Warn = $Warn + 1
                            } else {
                                $Color = "red"
                                $Err = $Err + 1
                            }
                            Write-Host "$BacklogFileCount files in backlog $SendingMember->$ReceivingMember for $RGName" -fore $Color
                        } # Closing iterate through all folders
                    } # Closing If replies to ping
                } # Closing If Connection enabled
            } # Closing iteration through all connections
        } # Closing iteration through all groups
        Write-Host "$Succ successful, $Warn warnings and $Err errors from $($Succ+$Warn+$Err) replications."

    Thanks,
    Gordon
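
    A sketch of the usual adaptation: build up a report string alongside (or instead of) the Write-Host calls, then send it at the end with an explicit SMTP server; the addresses and server name are placeholders:

        # at the top of the script
        $Report = @()

        # inside the folder loop, next to (or instead of) Write-Host
        $Report += "$BacklogFileCount files in backlog $SendingMember->$ReceivingMember for $RGName"

        # at the very end of the script
        $Report += "$Succ successful, $Warn warnings and $Err errors from $($Succ+$Warn+$Err) replications."
        Send-MailMessage -From "dfsr-report@example.com" -To "you@example.com" `
            -Subject "DFSR backlog on $ComputerName" `
            -Body ($Report -join "`r`n") `
            -SmtpServer "smtp.example.com"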

  • script to run sox to combine multiple mono tracks to stereo

    - by Ze'ev
    I have a folder full of .wav audio files. Some are stereo, most are mono splits. The mono split pairs are all named "foo bar track.L.wav" and "foo bar track.R.wav". I can use the command line tool sox to combine a mono pair into one stereo track like this:

        sox -M track1.L.wav track1.R.wav track1.Stereo.wav

    where the first two files are the mono pair and the third is the output stereo file. This is great, but I'd like a script that will automatically find all the mono pairs and combine them into stereo files. I.e., I need it to find all files which have the same name except for the .L. and .R. before the extension, and run sox on them, outputting to a new file with the same name without the L/R suffix.

    For example, if my folder contains these files:

        track1.L.wav track2.L.wav track3.L.wav track4.L.wav
        track1.R.wav track2.R.wav track3.R.wav track4.R.wav
        track6.wav track7.wav

    I need to run these commands:

        sox -M track1.L.wav track1.R.wav track1.Stereo.wav
        sox -M track2.L.wav track2.R.wav track2.Stereo.wav
        sox -M track3.L.wav track3.R.wav track3.Stereo.wav
        sox -M track4.L.wav track4.R.wav track4.Stereo.wav

    Here's where I am so far:

        for file in ./*.L.wav; do
          file2=`echo $file | sed 's_\(.*\).L.wav_\1.R.wav_'`;
          out=`echo $file | sed 's_\(.*\).L.wav_\1.STEREO.wav_'`;
          echo $file - $file2 - $out;
        done

    That works, but when I replace the echo line with

        sox -M $file $file2 $out;

    it doesn't work; spaces in the filenames cause it to fail.
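
    A sketch of a quoting-safe version - the failure is the unquoted variables, and bash parameter expansion avoids the sed round-trip entirely (the .STEREO.wav suffix follows the script above):

        #!/bin/bash
        for file in ./*.L.wav; do
          base="${file%.L.wav}"              # strip the .L.wav suffix
          file2="$base.R.wav"
          out="$base.STEREO.wav"
          if [ -f "$file2" ]; then
            sox -M "$file" "$file2" "$out"   # quotes keep spaces in filenames intact
          fi
        done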

  • restrict access to IIS virtual directory from root website

    - by Senthil
    I have two domains (domain1.com and domain2.com). Both of them use the same Windows hosting server with IIS7. One of the domains is called the "primary domain" by my hosting provider (GoDaddy) and it always points to the root folder that I was given. For the other domain, I have created a virtual directory in IIS and pointed it there. The folder structure is like this:

        root/
        --Default.aspx
        --SomeFile.aspx
        --domain2folder/
        ----Default.aspx
        ----Domain2SomeFile.aspx

    So, if I type domain1.com, I see the regular Default.aspx. But if I type domain2.com, I am shown the contents of domain2folder as if it were a separate web application - I think that is what an IIS virtual directory is meant for. Well and good. But the problem is, when I type http://domain1.com/domain2folder, I see domain2's website! I don't want that to be shown when someone uses that path on domain1; only if they use domain2.com should users be able to see those contents. How can I do that? Hope I am making sense. Thanks.
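
    If the IIS URL Rewrite module is available, one way to hide the folder on the primary host name is a host-conditioned rule in the root site's web.config; a sketch (the rule name and host patterns are illustrative):

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="HideDomain2FolderOnDomain1" stopProcessing="true">
                <match url="^domain2folder(/.*)?$" />
                <conditions>
                  <add input="{HTTP_HOST}" pattern="^(www\.)?domain1\.com$" />
                </conditions>
                <action type="CustomResponse" statusCode="404" statusReason="Not Found" statusDescription="Not Found" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>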

  • Online Storage and security concerns

    - by Megge
    I plan to set up a small file server. I already own a small server at HostEurope (VirtualServer L, 250GB space), but they don't offer enough space (there is the HostEurope Cloud, but paying for bandwidth isn't an option here, and video streaming should be possible).

    Requirements summarized: storage: 2TB; users: ~15; file sizes: < 100GB; should be easily reachable (mountable as a network drive, or at least with solid "communication" software).

    My first question would be: where can I get halfway affordable online storage, and how should I connect it to my server? Getting an additional server is a bit overkill, as I know no hoster which allows 2 TB on a small 2 GHz dual-core, 2 GB RAM machine (that would be enough by far, I just need a lot of space), and connecting it via NFS or FTP over the internet seems a bit strange and cripples performance. Do you have any advice on where I could get that storage service? (I sent HostEurope a custom request today, but they haven't answered yet. If they can provide me with that space, this question will be irrelevant, but the second one is the more important one anyway; don't do much more than recommend services based on experience, you don't have to crawl for hours through hosting providers.) livedrive, for example, offers 5 TB for 17€/month; I'd be happy with 2 TB for 20€. The caveat: it doesn't allow multiple users, which leads me to my second question.

    Where are the security problems? Which protocol is sufficient (I want private and "public" folders etc., the usual "every user has their own space plus a public space" thing), secure and fast? I'd tend towards (S)FTP. The problem with FTP is that most of those hosting services don't even allow FTP with multiple users, and single users lead me into "hacking" together a solution (you could map the basic folder structure on the main server and just mount every subfolder from the storage; things get difficult with a public folder with 644 permissions, though). Is using something like PKI or 802.1X overkill for private use?

  • IIS 7.5 Siteminder Does not protect ASP.net MVC requests

    - by HariM
    We are trying to use ASP.NET MVC with SiteMinder for single sign-on. This is on Windows Server 2008 R2 with IIS 7.5, SiteMinder agent version 6QMR6.

    Problem: SiteMinder protects physical files that exist, but it does not protect the folder when we try to access a file that does not exist. It should redirect to the login page even if the file doesn't exist, as long as the user is accessing a protected folder. How do I configure IIS 7.5 so that it does not verify that a file exists before authentication by SiteMinder? SiteMinderWebAgent is a handler (wildcard script map) we created using ISAPI6WebAgent.dll.

    How do I protect ASP.NET MVC requests with SiteMinder? (Added this because my previous question did not solve the problem.) An MVC request shows up in the IIS log but not in the SiteMinder log.

    Update: Microsoft Support says that IIS 7.5 (and earlier versions) doesn't support wildcard mappings on any two ISAPI handlers that both use the * wildcard. Currently, in my case, SiteMinder has the * wildcard and ASP.NET MVC (whose handler is aspnet_isapi) has the * wildcard to handle the requests. Ordered priority doesn't help in the wildcard-mapping case with just *. I'm not convinced by the answer, but will wait till tomorrow for them to get back to me.
