Search Results

Search found 35708 results on 1429 pages for 'default copy constructor'.

  • Enable SSL with Jetty 8

    - by Jerec TheSith
    I received certificates from GoDaddy and I'm trying to enable SSL with Jetty, but I receive an "error 107: SSL protocol error" when connecting to https://server.com:8443. I generated the keystore using these commands: keytool -keystore keystore -import -alias gd_bundle -trustcacerts -file gd_bundle.crt keytool -keystore keystore -import -alias server.com -trustcacerts -file server.com.crt and placed it in /opt/jetty/etc/, and used the following configuration in jetty.xml: <Call name="addConnector"> <Arg> <New class="org.eclipse.jetty.server.ssl.SslSelectChannelConnector"> <Arg> <New class="org.eclipse.jetty.http.ssl.SslContextFactory"> <Set name="keyStore"><SystemProperty name="jetty.home" default="."/>/etc/keystore</Set> <Set name="keyStorePassword">**password1**</Set> <Set name="keyManagerPassword">**password1**</Set> <Set name="trustStore"><SystemProperty name="jetty.home" default="."/>/etc/keystore</Set> <Set name="trustStorePassword">**password1**</Set> </New> </Arg> <Set name="port">8443</Set> <Set name="maxIdleTime">30000</Set> <Set name="Acceptors">2</Set> <Set name="statsOn">false</Set> <Set name="lowResourcesConnections">20000</Set> <Set name="lowResourcesMaxIdleTime">5000</Set> </New> </Arg> </Call> Am I missing something in Jetty's configuration?
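    A plausible cause: keytool -import stores only certificates, never a private key, so a keystore built that way cannot complete the SSL handshake. A sketch of one common fix, assuming the private key generated alongside the original CSR is still available as server.com.key:
        # bundle key + cert + GoDaddy chain into PKCS12, then convert to a JKS keystore
        openssl pkcs12 -export -inkey server.com.key -in server.com.crt \
            -certfile gd_bundle.crt -name server.com -out server.com.p12
        keytool -importkeystore -srckeystore server.com.p12 -srcstoretype PKCS12 \
            -destkeystore /opt/jetty/etc/keystore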

  • Firefox: Where does Firefox store the open windows/tabs/URLs for restoring after a crash?

    - by jens
    Hello, in which location and file does Firefox save the windows I last had open (when Firefox crashed)? I have a complete "hot dump" copy of a file system and need to restore the state Firefox was in when the system crashed, but I cannot restore the full backup itself. I can only extract Firefox's files, and I do not know which files to search for the URLs that were open when the snapshot of the whole filesystem was taken. Thanks!
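    For reference, Firefox of that era keeps crash-recovery session state as JSON in sessionstore.js (and sessionstore.bak) inside the profile directory; a sketch for locating it in the extracted copy (the backup path is a placeholder):
        find /path/to/extracted/.mozilla/firefox -name 'sessionstore*'
        # on Windows the profile lives under %APPDATA%\Mozilla\Firefox\Profiles instead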

  • Should I use EXT4 or XFS to be able to 'sync'/back up to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor.) Here's the setup: a brand new dedicated server (8 GB RAM, some 140+ GB disk, RAID 1 via HW controller, 15000 RPM). It's a production web server (with MySQL in it too, not just serving web requests), not a personal desktop computer or similar, running Ubuntu Server 64-bit 10.04 LTS. We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3 via AWS' console. We are now migrating to the dedicated server and I want to be able to back up our data to Amazon's S3, the main reason being the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server. There are two approaches I am thinking of: do a "simple" file-based backup with rsync, dumping the database and other files, and uploading to Amazon via S3 API commands, or to an EC2 instance, or something; or do a filesystem "freeze" (using XFS), take a snapshot with the usual EBS/EC2 snapshot tools, and upload it to Amazon. Here's my question (or series of questions): Can I safely use XFS for the whole system as the main and only format on the dedicated server? If not, is it safe to use EXT4, or should I use something else? Would it then be possible to make snapshots of the system to upload to Amazon? Is it possible/feasible/practical to do what I want to do at all? Any recommendations? When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of an XFS system that is already an EBS volume; my intention is to do it on a "real"/bare-metal dedicated server. Update: I just saw this on Wikipedia: "XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager." I had always assumed that I could choose between 2 ways of doing snapshots: via LVM or via XFS (without LVM). After reading this, I realize the 2 options are more like: with XFS alone, 1) do xfs_freeze; 2) copy the frozen files via, e.g., rsync; 3) unfreeze XFS; with LVM and XFS, 1) do xfs_freeze; 2) make a binary copy of the frozen fs via lvcreate and related commands; 3) unfreeze XFS; 4) somehow back up the LVM snapshot. Thanks a lot in advance; let me know if I need to clarify something.
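    A bare sketch of the first XFS option above (mount point and destination are placeholders; note that XFS blocks all writes, including MySQL's, for as long as the freeze lasts):
        mysqldump --all-databases > /root/db.sql    # dump first so the DB copy is consistent
        xfs_freeze -f /data                         # freeze the XFS mount
        rsync -a /data/ backup@ec2-host:/backups/data/
        xfs_freeze -u /data                         # thaw as soon as the copy finishes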

  • Extracting the init script from the initramfs built into a Linux bzImage

    - by Maciej Piechotka
    I have the following problem: I damaged my system (Gentoo, by rebuilding it with GCC 4.5) beyond repair. I unmounted /home, copied /etc and other important files, and started reinstalling the system. However, I forgot to copy the init script. It is still present in the kernel image that I have. How can I extract it? Please note that the initrd is not a separate file; it is built into the kernel image.
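    A sketch of one way to dig it out, assuming GNU grep/cpio and the extract-vmlinux helper from recent kernel sources (offsets vary; if the embedded cpio was itself compressed, search for a gzip magic instead of the cpio one):
        ./scripts/extract-vmlinux bzImage > vmlinux   # undo the bzImage compression
        off=$(grep -abo '070701' vmlinux | head -n1 | cut -d: -f1)   # "newc" cpio magic
        dd if=vmlinux bs=1 skip="$off" | cpio -id --no-absolute-filenames
        # the built-in init script then appears as ./init in the extracted tree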

  • How to merge an arbitrary snapshot into the base VDI in VirtualBox

    - by jmathew
    I botched a transfer of a VM from one hard disk to the other. Now I'm left with the base VDI and a whole bunch of snapshots. My steps: copied the old VM directory over to the new HDD; deleted the old VM and added it back using Machine > Add, providing the old XML file; couldn't add the base VDI file due to a conflict, so changed the UUID of the base VDI with VBoxManage.exe internalcommands sethduuid <path/to/vdi>; attempted to roll back to a snapshot, but it seems the VM is looking for the snapshots on the old HDD (which is formatted and gone). This is the error ("networked" is the snapshot's name): Failed to restore the snapshot networked of the virtual machine lfs. Could not open the medium 'H:\vm\ft.vdi'. VD: error VERR_PATH_NOT_FOUND opening image file 'H:\vm\ft.vdi' (VERR_PATH_NOT_FOUND). Result Code: E_FAIL (0x80004005) Component: Medium Interface: IMedium {53f9cc0c-e0fd-40a5-a404-a7a5272082cd} The old HDD was drive H:; the new one is drive N:. How can I modify the snapshots/VM to look in N:\vm\ft.vdi for the base VDI? I've already set the defaults in VirtualBox's general settings (default VM/snapshot locations). Or, failing that, how can I merge the old snapshot with the base VDI, given that the only thing that has changed is the base VDI's UUID?
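    One hedged approach: the snapshot paths are stored as plain text in the machine's XML settings, so with VirtualBox fully closed (and the files backed up first) every H:\vm reference can be re-pointed at N:\vm with a text editor, or for example (the file location is an assumption):
        powershell -Command "(Get-Content 'C:\Users\you\.VirtualBox\Machines\lfs\lfs.xml') -replace 'H:\\vm','N:\vm' | Set-Content 'C:\Users\you\.VirtualBox\Machines\lfs\lfs.xml'"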

  • Create a PDF that defaults to flip on short edge when printed double-sided

    - by user568458
    We're creating a 2-page PDF brochure with a target audience who will print it on their regular office or home printers. If it is printed on a double-sided printer (common in offices), it'll come out correctly if the user manually selects "Flip on short edge", but will come out with the second page upside down if default settings are used (flip on long edge). Our target audience isn't very tech-literate, and we've found that even within our own office network there is variation in the location of the "Flip on short edge" setting, so it isn't realistic to give everyone who downloads the PDF instructions on how to change this setting, or to expect everyone to find it on their own. So, when creating a PDF (ideally using Adobe InDesign or Acrobat, but if other software or hacking is needed, that's fine...), is there a way to configure the PDF file itself so that when printed double-sided with default settings, it flips on the short edge? If possible, it would be useful supplementary info to know how reliable any such method is across different PDF readers (e.g. Adobe Reader, Acrobat, Mac Preview, built-in browser readers (e.g. Chrome), FoxIt, etc.). If questions about content creation like this aren't a great fit here, feel free to migrate it to the Graphic Design Stack Exchange site; this question seems to fall halfway between the two sites.
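    There is a PDF-level hint for exactly this: the /Duplex key in the document's ViewerPreferences. A sketch that stamps it with Ghostscript (Adobe Reader/Acrobat honor it unless the user's driver settings override it; support in Preview, browser viewers and FoxIt varies, so test before shipping):
        gs -o brochure-duplex.pdf -sDEVICE=pdfwrite \
           -c '[ {Catalog} << /ViewerPreferences << /Duplex /DuplexFlipShortEdge >> >> /PUT pdfmark' \
           -f brochure.pdf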

  • importing a mysqldump file does not import triggers due to some permission problems

    - by user51792
    Hello, whenever I try to import a mysqldump export on another server, the triggers are never created, and if I remember correctly, I get an error message that superuser (SUPER) permission is required. If I remove the DEFINER it usually works, but if there is another way, I would prefer not having to edit the SQL file. When I simply copy over the .MYD, .MYI and .frm files, everything works perfectly. How can I import a mysqldump file so that the triggers are created as well?
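    If editing on the fly (rather than by hand) is acceptable, a sketch that strips DEFINER clauses during the import so no SUPER privilege is needed (the regex is an approximation and the user/db names are placeholders; spot-check a few matches first):
        sed 's/DEFINER=`[^`]*`@`[^`]*`//g' dump.sql | mysql -u someuser -p somedb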

  • Boot From USB Stick

    - by Nathan
    I formatted a 16 GB USB stick today so I could boot from it, and that works great. The problem is that it won't let me copy a 7 GB Ghost image over to the stick; it says there isn't enough space, even though Windows shows 14 GB available. Can anyone give me some insight into this issue?
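    The usual culprit is FAT32's 4 GB per-file limit rather than free space. Two hedged ways around it (the drive letter is a placeholder; reformatting erases the stick, and your boot environment must be able to read the new filesystem):
        format E: /FS:exFAT /Q
        :: or keep FAT32 and have Ghost split the image into sub-4GB chunks:
        ghost.exe -split=2000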

  • Apache and Virtual Hosts Problem on OS X

    - by Charles Chadwick
    I recently formatted and installed my iMac. I am running 10.6.5. Prior to this format, I had the default Apache web server up and running with several virtual hosts, and everything ran beautifully. After formatting, I set everything back up again, and now Apache is acting funny. Here is a description of what I have going on. My default root directory for the Apache web server points to an external hard drive. In my httpd.conf, here is what I have: DocumentRoot "/Storage/Sites" Then a few lines beneath that: <Directory /> Options FollowSymLinks AllowOverride All Order deny,allow Allow from all </Directory> And then beneath that: <Directory "/Storage/Sites"> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny Allow from All </Directory> At the end of this file, I have commented out the userdir include conf file: Include /private/etc/apache2/extra/httpd-userdir.conf And uncommented the virtual hosts conf file: Include /private/etc/apache2/extra/httpd-vhosts.conf Moving on, I have the following entry in my vhosts file: <VirtualHost *:80> DocumentRoot "/Storage/Sites/mysite" ServerName mysite.dev </VirtualHost> I also have a host record in my /etc/hosts file that points mysite.dev to 127.0.0.1 (I also tried using my router IP, 192.168.1.2). The problem I am coming across is that, despite having PHP files in /Storage/Sites/mysite, the server is still looking at /Storage/Sites. I know this because the DocumentRoot contains a PHP file with phpinfo() (whereas the index.php file in mysite has different code). I have tried setting up other virtual hosts, but they all do the same thing. Also, "NameVirtualHost *:80" is in my vhosts file; I saw that as a solution on another thread here, but it doesn't seem to make a difference. Any ideas on this? Let me know if this is not enough information.
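    Two quick checks that often pinpoint this kind of thing, as a sketch: ask Apache how it actually parsed the vhosts, and confirm what the test hostname resolves to on the machine you're browsing from:
        apachectl -S                               # parsed NameVirtualHost/vhost mapping
        dscacheutil -q host -a name mysite.dev     # the OS X resolver's view of the name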

  • mencoder: ripping a DVD, but audio seeking doesn't work

    - by nos
    I'm trying to rip a DVD on Linux using mencoder. I'm doing: mencoder dvd://0 -ovc lavc -lavcopts autoaspect -alang en -oac copy -o dvd.avi The video and audio come out nicely; however, if I seek/fast-forward while playing the .avi file, the audio always starts from the beginning again. What am I doing wrong?
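    A common workaround, assuming the stream-copied AC3 track is what breaks seeking: rebuild the AVI index at playback time, or re-rip with the audio re-encoded instead of copied (the bitrate is an example value):
        mplayer -forceidx dvd.avi
        # or:
        mencoder dvd://0 -ovc lavc -lavcopts autoaspect -alang en \
                 -oac mp3lame -lameopts abr:br=128 -o dvd.avi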

  • How to host multiple Flex applications in IIS 7

    - by Devtron
    Hello, I manually deploy a Flex application to my web server (IIS 7). There are two virtual directories: 1) Default, 2) myFlexApp1. myFlexApp1 is where my working Flex application resides. I now need to deploy a different Flex application (let's call it myFlexApp2) to the same web server. I set up a virtual directory for myFlexApp2 and it complains about the bindings using port 80, which is already used by myFlexApp1. I have tried to give them separate host names in their binding properties, for example myFlexApp1.mydomain.com and myFlexApp2.mydomain.com, but I can never get myFlexApp2 to show from an external browser. I was able to get one or the other to display, but could never run both. Here is what I need: myFlexApp1.mydomain.com -- myFlexApp1; calendar.mydomain.com -- myFlexApp2; test.mydomain.com -- myFlexApp1, where test.mydomain.com is the default URL. Is this possible? What am I doing wrong? I even tried to edit the hosts file in C:\Windows\System32\drivers\etc, but that didn't work either. How can I serve up two Flex applications on IIS 7? It shouldn't be this hard!
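    As a sketch, one host-header binding per site on port 80 should give exactly the mapping listed (the site names are assumptions), e.g. with appcmd:
        %windir%\system32\inetsrv\appcmd set site /site.name:"myFlexApp1" /+bindings.[protocol='http',bindingInformation='*:80:test.mydomain.com']
        %windir%\system32\inetsrv\appcmd set site /site.name:"myFlexApp2" /+bindings.[protocol='http',bindingInformation='*:80:calendar.mydomain.com']
        :: note: external browsers need public DNS records for these names; entries in the server's own hosts file won't help them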

  • Different external IP addresses from different sites

    - by user630286
    My router is ClearOS 6 (CentOS 6). In my router, I have two external (internet) network connections from two ISPs. The primary connection is eth2, connected to a cable modem, and the second one is ppp0, connected to a DSL modem. I have assigned eth2 as the primary connection (with a high metric value); in fact, this is done through ClearOS's multiwan web interface. I have a test in my Nagios setup to monitor whether the primary connection is in use, based on the result of curl ifconfig.me. But it seems that ifconfig.me always gives the IP address of my secondary connection. I tested it through a browser: yes, ifconfig.me gives the secondary internet's (ppp0) IP address, but whatismyipaddress.[com|org] gives my primary IP address (eth2). I checked the default route on the router through ip route list 0/0, which also shows the primary connection (eth2) as the default route. traceroute www.google.com and traceroute ifconfig.me both seem to trace through the primary connection (eth2). As our secondary internet connection has only a limited download allowance, I don't want to end up having to pay a large sum at the end of the month. Does somebody have an idea why ifconfig.me shows my secondary address? What is the best way to ensure that my router (and thus the LAN) uses the right internet connection?
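    A couple of hedged checks: ask each WAN interface for its public address directly, and inspect the policy-routing rules that multiwan installs (they override the plain default route, which would explain the symptom):
        curl --interface eth2 ifconfig.me
        curl --interface ppp0 ifconfig.me
        ip rule show    # multiwan's source/mark rules live here, not in table main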

  • Caching all files in Varnish

    - by csgwro
    I want my Varnish servers to cache all files. At the backend there is lighttpd hosting only static files, and there is an MD5 in the URL in case of file changes, e.g. /gfx/Bird.b6e0bc2d6cbb7dfe1a52bc45dd2b05c4.swf. However, my hit ratio is very poor (about 0.18). My config: sub vcl_recv { set req.backend=default; ### passing health to backend if (req.url ~ "^/health.html$") { return (pass); } remove req.http.If-None-Match; remove req.http.cookie; remove req.http.authenticate; if (req.request == "GET") { return (lookup); } } sub vcl_fetch { ### do not cache wrong codes if (beresp.status == 404 || beresp.status >= 500) { set beresp.ttl = 0s; } remove beresp.http.Etag; remove beresp.http.Last-Modified; } sub vcl_deliver { set resp.http.expires = "Thu, 31 Dec 2037 23:55:55 GMT"; } I have done some performance tuning: DAEMON_OPTS="${DAEMON_OPTS} -p thread_pool_min=200 -p thread_pool_max=4000 -p thread_pool_add_delay=2 -p session_linger=100" The main URL that misses is... /health.html. Is that pass to the backend configured correctly? With health checking disabled, the hit ratio increases to 0.45; now mostly "/crossdomain.xml" misses (it is requested from many domains, as it is a wildcard). How can I avoid that? Should I do anything about other headers, like User-Agent or Accept-Encoding? I think the default hashing mechanism uses URL + host/IP. Compression is used at the backend. What else can improve performance?
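    To see exactly which requests fall through, a sketch using the Varnish 2/3-era tools (log tag names differ on newer versions):
        varnishtop -b -i TxURL                         # URLs actually sent to the backend
        varnishstat -1 | egrep 'cache_hit|cache_miss'  # overall counters behind the ratio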

  • Installation of Skype is blocked

    - by pragadheesh
    Hi, on my office laptop running XP, installation of Skype is blocked, and I'm not able to download the software from the internet either. So I downloaded it on a different machine and tried copying it to my desktop, but during the copy, the Skype installer got deleted from my pen drive itself. How is this done, and how can I get rid of it? Thanks in advance.

  • Remote script execution on Windows 2003 Server - alternatives to PsExec

    - by chickeninabiscuit
    We want to deploy our application to our Test server from our Hudson server. I'd like to have Hudson copy the application files and start a script that would run locally on the Test server. We can't use PsExec because of a cross-domain policy. Currently we are doing this manually, by RDPing to the Test server and checking out the code from Subversion by hand. Are there alternatives to PsExec that can bypass the cross-domain policy problem?
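    One option that tolerates the domain boundary because explicit credentials are passed: WinRM's winrs (a sketch; host, account and script path are placeholders, and the Hudson box may also need the Test server added to its WinRM TrustedHosts list):
        :: one-time, on the Test server:
        winrm quickconfig
        :: then from the Hudson box:
        winrs -r:testserver -u:TESTDOMAIN\deployer -p:secret cmd /c D:\deploy\deploy.bat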

  • SQuirreL Client: Opening up a table in a separate tab

    - by Dalin Seivewright
    I've switched to SQuirreL SQL Client to view an Oracle database with a variety of tables. At any given time, I need to look at the contents of several related tables at the same time. The problem is that, at least by default, SQuirreL does not open multiple tables at a time; when you click on a different table object, the main view is refreshed with the newly selected table's data. Oracle's SQL Developer (which I am trying to move away from) did this by default too, if I recall, but there was an option to "freeze" panes. Does a similar option exist in SQuirreL? I do not require the ability to view the contents of two different tables on the same screen (i.e. a split view), but I would like tabs for each table so I can quickly switch between table views rather than having to find the table in the table list over and over again. Note: I am using this in a professional capacity, but if it does not "belong" here then I suppose it could be moved to Super User.

  • Adding a 2nd DC to the domain from a different subnet over VPN

    - by EagerToLearn
    I'm in the process of adding a second DC to our domain and just want to make sure I have all the steps right before proceeding. Info: DC1 is 2008 R2 Standard; DC2 is 2008 R2 Standard; Network1 is 192.168.39.x/24; Network2 is 10.0.0.x/24; the VPN is a SonicWall. The two DCs will be at two different sites, but the networks are connected by hardware VPN (SonicWall). The main DC will be on the 192.168.39.0/24 network; the 2nd DC will be on 10.0.0.0/24. Here are the steps I plan to take; please let me know if I'm missing anything. Part 1: In AD Sites and Services on DC1, create a new site and subnet for DC2. (Or should I create a new one for both? And can I use the default IPSiteLink without changing anything other than the refresh timer?) Part 2: Point the DNS of DC2 to DC1. Run /forestprep and /domainprep (on both, or just DC1?). Run dcpromo and select "Additional Domain Controller for Existing Domain", then continue with the normal steps and default database locations. Also, when DCPromo-ing DC2, do I need to have "Append primary and connection specific DNS suffixes" and "Append parent suffixes of the primary DNS suffix" checked?
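    For reference, a sketch of the prep commands: adprep runs once, against the schema master, from the 2008 R2 installation media (since DC1 is already 2008 R2, the forest and domain are likely already prepared and these may report nothing to do):
        :: on DC1 (or wherever the schema master role sits):
        D:\support\adprep\adprep.exe /forestprep
        D:\support\adprep\adprep.exe /domainprep
        :: then on DC2, with its DNS pointed at DC1:
        dcpromo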

  • Multiple Rack apps on nginx + Passenger, one as root, the other not... config help

    - by cannikin
    So I've got two apps I want to run on a server. One app I would like to be the "default" app, that is, all URLs should be sent to this app by default, except for a certain path, let's call it /foo: http://mydomain.com/ -> app1 http://mydomain.com/apples -> app1 http://mydomain.com/foo -> app2 My two Rack apps are installed like so: /var /www /apps /app1 app.rb config.ru /public /app2 app.rb config.ru /public app1 -> apps/app1/public app2 -> apps/app2/public (app1 and app2 are symlinks to their respective apps' public directories). This is the Passenger setup for sub-URIs described here: http://www.modrails.com/documentation/Users%20guide%20Nginx.html#deploying_rack_to_sub_uri With the following config I've got /foo going to app2: server { listen 80; server_name mydomain.com; root /var/www; passenger_enabled on; passenger_base_uri /app1; passenger_base_uri /app2; location /foo { rewrite ^.*$ /app2 last; } } Now, how do I get app1 to pick up everything else? I've tried the following (placed after the location /foo directive), but I get a 500 with an infinite internal redirect in error.log: location / { rewrite ^(.*)$ /app1$1 last; } I hoped that the last directive would prevent the infinite redirect, but I guess not; /foo gets the same error. Any ideas? Thanks!
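    A sketch of one way to stop the loop, on the assumption that the rewritten URI re-enters location / and gets the /app1 prefix added again on every pass; guard the catch-all so an already-prefixed URI is left alone (untested):
        location / {
            if ($uri !~ ^/app1) {
                rewrite ^(.*)$ /app1$1 last;
            }
        }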

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic, etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things; here is a list: make a complete filesystem clone with Antonio Diaz's ddrescue; run DiskWarrior on the copy and repair whatever errors occurred; wipe out all ACLs on the entire drive; set all permissions to the same wide-open 777 value; remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive; transfer the data to a newly formatted drive folder by folder, resetting the Spotlight index between each addition to watch for issues (interestingly, no issues occurred except with the Documents folder, and when I transferred only the Documents folder to a newly formatted drive on its own there was no trouble, so it appears it may not be the content but the quantity or a specific combination of data that causes the problem); use Data Rescue to transfer the data to yet another newly formatted drive to expose any missed hidden files. Between each of the above steps I stopped Spotlight (searching for anything beginning with "md" in Activity Monitor > All Processes and quitting it), deleted the .Spotlight-V100 directory from the affected drive, and restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it. In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md* processes from Activity Monitor to be able to eject it without Force Eject. Once I disconnect the drive in the 4-hours-remaining state, if I reattach it, Spotlight estimates the remaining time forever and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used both USB and FireWire drives). I have tried this on several machines (3, to be precise) and on 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option, because the owner has no clue where things are; the data on the volume dates back to music projects and compositions from 2003 and before, and he needs to be able to query for results. Anyone got any ideas? Thanks, M
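    One thing not yet on the list that may be worth trying: have mds itself discard and rebuild the volume's index instead of deleting .Spotlight-V100 by hand (a sketch; the volume name is a placeholder):
        sudo mdutil -i off /Volumes/MusicArchive   # stop indexing the volume
        sudo mdutil -E /Volumes/MusicArchive       # erase its existing index
        sudo mdutil -i on /Volumes/MusicArchive    # re-enable and rebuild from scratch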

  • Can't see Chinese characters in Vim

    - by SpawnST
    I find that when I type Chinese characters (encoded with UTF-8) into Vim, I cannot see them at all, although they do exist there: I can copy and paste them into other text editors and everything seems fine. How can I fix this problem? Thanks!
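    A minimal sketch of settings that usually fix this, assuming the terminal (or GUI font) can display CJK glyphs and $LANG is a UTF-8 locale:
        echo 'set encoding=utf-8' >> ~/.vimrc
        echo 'set fileencodings=ucs-bom,utf-8,default,latin1' >> ~/.vimrc
        # restart Vim afterwards; check the terminal side with: locale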

  • Serve a specific set of error pages for different subdirectories

    - by navitronic
    I am currently trying to set up two different sets of error documents for separate folders within a website. I have two folders within the root of the site: demo/ and live/. Any requests that return 404s or 403s within the demo folder need to load one set of pages via the Apache ErrorDocument directives, e.g. ErrorDocument 404 /statuses/demo-404.html ErrorDocument 403 /statuses/demo-403.html and live needs to go to similarly named files: ErrorDocument 404 /statuses/live-404.html ErrorDocument 403 /statuses/live-403.html So far I have tried placing an .htaccess file in both directories with the ErrorDocument directives pointing to the specific files. The 404s work fine and reference the correct pages; however, the 403s do not work and revert to the server default when trying to access forbidden folders within the demo directory. The logs indicate the following: [Wed Jun 16 04:47:44 2010] [crit] [client 115.64.131.144] (13)Permission denied: /home/abstract/public_html/demo/xxx/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable Is this correct? Would Apache revert to the default because it is trying to look for the .htaccess in a folder it doesn't have permission in? Why wouldn't it work its way back through the folder tree? Can I make it do this?
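    The log line suggests Apache cannot read demo/xxx/.htaccess at all, and an unreadable .htaccess is answered with the stock 403 before any ErrorDocument is consulted. A sketch for verifying and fixing (the web-server user "nobody" is an assumption; check your User directive):
        sudo -u nobody cat /home/abstract/public_html/demo/xxx/.htaccess   # reproduce Apache's view
        chmod o+rx /home/abstract/public_html/demo/xxx                     # let Apache traverse the dir
        chmod o+r /home/abstract/public_html/demo/xxx/.htaccess            # and read the file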

  • Is there an app or script that will extract .rar files for Mac OS X...?

    - by smileemee
    ...after the download completes, similar to how ZIP files extract themselves when a download finishes and the zipped copy gets thrown in the trash? I usually use UnRarX, but lately, with all these .rar files, the nice ZIP extractor is not very helpful. A script with Automator? Anything to make extracting compressed files such as .rar easier. Thank you. Running Mac OS X 10.6.
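    A sketch of a folder-action shell script that could be attached to the Downloads folder via Automator, assuming the unrar command-line tool is installed (e.g. from MacPorts or Homebrew):
        for f in "$HOME/Downloads"/*.rar; do
            unrar x -o+ "$f" "$HOME/Downloads/" && mv "$f" "$HOME/.Trash/"
        done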

  • How do you access Time Machine backups of a different computer?

    - by baloo
    I'm currently backing up using a WD My Book World network drive that supports Apple Time Machine. I would like to copy some files from my old laptop's backup. However, the old backup isn't shown when browsing the Time Machine network drive; only the currently used machine is listed (I know there are 3 different backups on it). How can I access the files that don't belong to the currently used laptop?
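    For what it's worth, each machine's network backup lives in its own .sparsebundle on the share, and only the current machine's bundle is auto-mounted; a sketch for mounting the old laptop's bundle by hand (share and bundle names are placeholders):
        open 'smb://mybookworld/backup'                           # mount the Time Machine share
        hdiutil attach '/Volumes/backup/old-laptop.sparsebundle'  # attach the old machine's bundle
        # then browse the mounted volume's Backups.backupdb folder in the Finder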
