Search Results

Search found 46178 results on 1848 pages for 'java home'.

Page 1295/1848 | < Previous Page | 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302  | Next Page >

  • Users are getting a temporary profile

    - by Serhiy
    A bit about the current setup: it is a Windows 2008 R2 AD environment (all domain controllers are 2008 R2) with a couple of locations configured as Sites. Each location has DFS on its AD server. Roaming profiles are not used or configured. Users have their home folder mapped as the S: drive to a DFS shared folder; for example, in the profile tab a user has: Home Folder - Connect - S: to \\domain.com\dc\users\%username%. We have also redirected the Desktop, Documents and Downloads folders to \\domain.com\dc\users. Everything was fine. Suddenly (today), users in most locations lost their local profiles (on both XP and W7 desktops) and got temporary profiles. It also looks like the local profile was created today (going by the folder properties). I checked the events on a couple of machines and there are no errors related to profiles or the logon process. I do not see issues in the event logs on the servers either. Basically, I have run out of ideas about what is wrong and why the machines lost their local profiles. PS: Laptop users do not have their folders redirected, but lost their profiles as well.

    Read the article

  • Family server setup

    - by Manny
    Hi all, I really hope some of you can give me some direction. I have set up a Linux server at home, and through Samba I can access files from different computers in my home. I would like to use this server as a file server for my family (brothers, sisters and parents, who all live in their own homes). I really like the way it is set up right now with user and permission controls, but I've read that it is a bad idea to open up the Samba port to the world. The requirements are simple: 1) it should be easy to access, using a standard web browser or by mounting the drive (no VPN setup, PuTTY, etc.); 2) it should be somewhat secure. We just want to share family pictures instead of putting them on Facebook or Picasa or another web service, nothing top secret. Here is what I've looked into: 1) WebDAV. It seems decent, but it appears Windows 7 doesn't like it very much, even with digest authentication. User controls and permissions are not as flexible as Samba's (or at least to my knowledge). I really like the user and group permissions in Samba, but I could live with WebDAV if it worked seamlessly with Windows; it should just work, shouldn't it? 2) I read somewhere to stay away from FTP, as it is outdated and there are newer and better Internet file-server setups. Was that a reference to WebDAV? I am so confused, please help... Manny
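    (A possible starting point for the WebDAV route, as a rough sketch only: it assumes Debian/Ubuntu-style Apache paths, that the photos live under /srv/family, and that HTTPS is added separately. All names here are made up.)
        # enable WebDAV and digest auth, then add a config along the lines sketched
        # in the comments below (e.g. in /etc/apache2/conf.d/family-dav.conf)
        sudo a2enmod dav dav_fs auth_digest
        sudo htdigest -c /etc/apache2/family.digest family manny   # repeat per family member
        #   Alias /family /srv/family
        #   <Location /family>
        #       Dav On
        #       AuthType Digest
        #       AuthName "family"
        #       AuthUserFile /etc/apache2/family.digest
        #       Require valid-user
        #   </Location>
        sudo service apache2 reload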

    Read the article

  • Drowning in documents - recommend doc management solutions?

    - by Martin Day
    I've been researching document management lately. I want to organise my docs at home and also at the office. Finding affordable solutions one can actually test drive is quite hard. Some that I've downloaded just don't seem to work (testing on a brand new Vista PC). I've seen some software on Amazon like PaperPort, but I'm not really sure what it's like. For home I'd like something to organise files, with full-text search, good scanner integration, a nice interface, etc. The office seems harder: I need something that does proper workflow, keeps versions and has an audit trail. Documents can be approved, checked in/out, etc. I know a few clients who would like something similar. It would be great just to import thousands of documents from a shared drive and get them indexed with duplicates removed. I'd like to be super clear about how and where the documents are being stored, so that maintenance and backups are straightforward. My Google/Twitter searches lead back to the same tired and vague web pages pushing what look like expensive, custom-made solutions. Some might be very good, I suppose, but it's darn hard to tell. I don't mind a hosted package, but all in all I don't think something like Google Docs, as good as it is now, will work. There are too many quirks and missing features (compared to Office). Being able to work directly with the common Office file formats is important. I've noted a similar-sounding question asked here back in August, but it didn't seem to turn up many solutions that I could easily and quickly apply. Also, there could have been some changes since then, so I feel it's worth asking.

    Read the article

  • Symbolic link modification for HP-UX

    - by kalpesh
    Hi David Zaslavsky, recently I was working on modifying symbolic links for a particular set of files. While searching on the Internet I saw your post, and I tried to use the script you had posted: find /home/user/public_html/qa/ -type l -lname '/home/user/public_html/dev/*' -printf 'ln -nsf $(readlink %p|sed s/dev/qa/) $(echo %p|sed s/dev/qa/)\n' > script.sh. I tried to adapt it to my problem in an HP-UX environment, but it seems that the -lname option does not work with HP-UX's find. Do you know something equivalent that I can use? Just to give you an idea of my problem: I want to change all the symbolic links inside a particular folder. New symbolic link target: /base/testusr/scripts. Old symbolic link target: /base/produsr/scripts. Folder "A" contains more than 100 different files whose soft links point to /base/produsr/scripts, and I want the files inside folder A to point to /base/testusr/scripts instead. I am trying to achieve this on HP-UX and would really appreciate your help.
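    (Since HP-UX find has neither -lname nor -printf, and readlink may be absent, a plain shell loop over the directory is one workable substitute. This is only a sketch: the folder path is an assumption, and it assumes link targets contain no spaces. Test on a copy first.)
        cd /path/to/folderA
        for f in *; do
            [ -h "$f" ] || continue                        # symbolic links only
            target=`ls -l "$f" | awk '{ print $NF }'`      # current link target
            case "$target" in
                /base/produsr/scripts*)
                    newtarget=`echo "$target" | sed 's|/base/produsr/|/base/testusr/|'`
                    rm -f "$f" && ln -s "$newtarget" "$f"  # recreate the link with the new target
                    ;;
            esac
        done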

    Read the article

  • Error on LDAP Login - xsessions error - Session lasted less than 10 seconds

    - by Draineh
    I have two machines, both running CentOS 5.6 64-bit. The first machine runs DHCP, BIND and an OpenLDAP server. LDAP is correctly configured and users can authenticate against it. Using root, I configured machine 2 to use LDAP for authentication. When trying to log in, it successfully authenticates against a user stored on the LDAP server, but produces the following errors and then throws me back to the login screen. I can still sign in as root and use the machine as normal. The syslog doesn't show any errors, and I disabled SELinux to see if it was interfering. The error reads: "Your session only lasted less than 10 seconds. If you have not logged out yourself, this could mean that there is some installation problem or that you may be out of diskspace. Try logging in with one of the failsafe sessions to see if you can fix this problem." There is then a tickbox to view the contents of ~/.xsession-errors, which contains:
        /etc/gdm/PreSession/Default: Registering your session with utmp
        /etc/gdm/PreSession/Default: running: /usr/bin/sessreg -a -u /var/run/utmp -x "/var/gdm:0:Xservers" -h "" -l ":0" "admin"
        localuser:admin being added to access control list
        No profile for user 'admin' found
        /bin/sh: /usr/bin/dbus-launch --exit-with-session /etc/X11/xinit/Xclients: No such file or directory
        /bin/sh: line 0: exec: /usr/bin/dbus-launch --exit-with-session /etc/X11/xinit/Xclients: cannot execute: No such file or directory
    Apologies if something isn't spelt quite right; the system never actually saves this file, so I have had to type it across from the screen. Through the authentication panel in CentOS on the client I have set it to create the user's home directory on login. The user is being correctly authenticated and the /home/admin folder has been created, but this error would suggest it has not? The client is a new install on an 80 GB hard drive, so well over 80% of the drive is still available. Thanks for any suggestions or pointers.
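    (A hedged guess based on the .xsession-errors lines above: the session dies because /usr/bin/dbus-launch and/or /etc/X11/xinit/Xclients are missing on the client, not because of LDAP or the home directory. A quick check; the package names assume stock CentOS 5 repositories.)
        rpm -q dbus-x11 xorg-x11-xinit          # dbus-x11 provides /usr/bin/dbus-launch
        ls -l /usr/bin/dbus-launch /etc/X11/xinit/Xclients
        yum install dbus-x11 xorg-x11-xinit     # install whichever is missing, then retry the login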

    Read the article

  • Apache and MySQL not working well after extending filesystem

    - by xtrimsky
    I had 4 GB on my /var (/dev/mapper/vg00-var) filesystem and I wanted to extend it to 160 GB. I did it following this tutorial: http://faq.1and1.com/dedicated_servers/root_server/linux_admin_help/7.html. Now I have the space:
        Filesystem             Size  Used  Avail Use%  Mounted on
        /dev/md1               4.0G  424M  3.6G  11%   /
        /dev/mapper/vg00-usr   4.3G  1.4G  3.0G  32%   /usr
        /dev/mapper/vg00-var   198G  6.5G  192G   4%   /var
        /dev/mapper/vg00-home  4.3G  4.4M  4.3G   1%   /home
        none                   1.1G  0     1.1G   0%   /tmp
    But now I have a problem: for Apache to work, each time I reboot I also need to restart Apache with "apachectl -k restart", which is already terrible. I think this is because /var contains the htdocs. The worst part is that MySQL is not starting at all; MySQL also has some files in /var. What have I done wrong? :( Thank you. EDIT: attaching /var/log/mysqld.log:
        120602 11:17:44 InnoDB: Waiting for the background threads to start
        120602 11:17:45 InnoDB: 1.1.8 started; log sequence number 8354009
        120602 11:17:45 [ERROR] /usr/libexec/mysqld: unknown variable 'set-variable=local-infile=0'
        120602 11:17:45 [ERROR] Aborting
        120602 11:17:45 InnoDB: Starting shutdown...
        120602 11:17:46 InnoDB: Shutdown completed; log sequence number 8354009
        120602 11:17:46 [Note] /usr/libexec/mysqld: Shutdown complete
        120602 11:17:46 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
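    (The mysqld.log points at the actual MySQL blocker: the obsolete set-variable= syntax, which newer mysqld builds reject. A minimal sketch of the fix, assuming the option lives in /etc/my.cnf.)
        grep -n 'set-variable' /etc/my.cnf
        cp /etc/my.cnf /etc/my.cnf.bak
        sed -i 's/^set-variable[ ]*=[ ]*local-infile=0/local-infile=0/' /etc/my.cnf
        service mysqld start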

    Read the article

  • My yum repository is able to search packages, but not able to install them in RHEL?

    - by mandy
    I set up yum from the installation DVD. Following are the contents of my .repo file:
        [dvd]
        name=Red Hat Enterprise Linux Installation DVD
        baseurl=file:///media/dvd
        enabled=0
    I'm able to search packages. However, while installing I get the error below:
        [root@localhost dvd]# yum install libstdc++.x86_64
        Loaded plugins: rhnplugin, security
        This system is not registered with RHN.
        RHN support will be disabled.
        Setting up Install Process
        Nothing to do
    My yum search output:
        [root@localhost dvd]# yum search gcc
        Loaded plugins: rhnplugin, security
        This system is not registered with RHN.
        RHN support will be disabled.
        ======================== Matched: gcc ========================
        compat-libgcc-296.i386 : Compatibility 2.96-RH libgcc library
        compat-libstdc++-296.i386 : Compatibility 2.96-RH standard C++ libraries
        compat-libstdc++-33.i386 : Compatibility standard C++ libraries
        compat-libstdc++-33.x86_64 : Compatibility standard C++ libraries
        cpp.x86_64 : The C Preprocessor.
        libgcc.i386 : GCC version 4.1 shared support library
        libgcc.x86_64 : GCC version 4.1 shared support library
        libgcj.i386 : Java runtime library for gcc
        libgcj.x86_64 : Java runtime library for gcc
        libstdc++.i386 : GNU Standard C++ Library
        libstdc++.x86_64 : GNU Standard C++ Library
        libtermcap.i386 : A basic system library for accessing the termcap database.
        libtermcap.x86_64 : A basic system library for accessing the termcap database.
    Please guide me on this; I want to install gcc on my RHEL system.
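    (One likely culprit, going by the .repo contents above; a guess, since the search results may be coming from cached metadata: the repository is defined but disabled with enabled=0. Either set enabled=1 in the .repo file under /etc/yum.repos.d/, or enable it for a single transaction.)
        yum --enablerepo=dvd install gcc libstdc++.x86_64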

    Read the article

  • Logfile deleted on Oracle database: how to re-create it?

    - by Daniel
    For my database assignment we were looking into "database corruption", and I was asked to delete the second redo log file, which I did with the command rm log02a.rdo in the $HOME/ORADATA/u03 directory. I then started up my database using startup pfile=$PFILE nomount and mounted it with alter database mount;. Now when I try to open it with alter database open; I get the error:
        ORA-03113: end-of-file on communication channel
        Process ID: 22125
        Session ID: 25 Serial number: 1
    I am assuming this is because the second redo log file is missing. log01a.rdo is still there, but not the one I deleted. How can I go about recovering from this so that I can open my database again? I have looked into the database creation scripts, and they specify the log02a.rdo file to be 10M in size and part of group 2. If I do select group#, member from v$logfile; I get:
        1 /oradata/student_db/user06/ORADATA/u03/log01a.rdo
        2 /oradata/student_db/user06/ORADATA/u03/log02a.rdo
        3 /oradata/student_db/user06/ORADATA/u03/log03a.rdo
        4 /oradata/student_db/user06/ORADATA/u03/log04a.rdo
    So it is part of group 2. If I try to add the log02a.rdo file again, I'm told it is "already part of the database". If I drop group 2 and then add it again with ALTER DATABASE ADD LOGFILE GROUP 2 ('$HOME/ORADATA/u03/log02a.rdo') SIZE 10M; nothing changes: it supposedly alters the database, but it still won't open. Any ideas what I can do to re-create this file and be able to open my database again?
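    (A sketch of one recovery path, assuming group 2 was an INACTIVE, already-checkpointed group when its member was deleted; if it was the CURRENT log, media recovery rather than a simple clear may be required.)
        # run inside SQL*Plus as SYSDBA with the database mounted:
        #   SELECT group#, status FROM v$log;
        #   ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
        #   ALTER DATABASE OPEN;
        sqlplus / as sysdba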

    Read the article

  • proftpd initial directory for each user

    - by Dels
    After successfully setting up a ProFTPD server, I want to set an initial directory for each user. I have two users: webadmin, which can access every folder, and upload, which can only access the upload folder. The added config is:
        # Added config
        DefaultRoot ~
        RequireValidShell off
        AuthUserFile /etc/proftpd/passwd
        # VALID LOGINS
        <Limit LOGIN>
            AllowUser webadmin, upload
            DenyALL
        </Limit>
        <Directory /home/webadmin>
            <Limit ALL>
                DenyAll
            </Limit>
            <Limit DIRS READ WRITE>
                AllowUser webadmin
            </Limit>
        </Directory>
        <Directory /home/webadmin/upload>
            <Limit ALL>
                DenyAll
            </Limit>
            <Limit DIRS READ WRITE>
                AllowUser upload
            </Limit>
        </Directory>
    Everything works, but I have to tell my FTP client the initial directory for each user (otherwise it fails to retrieve the directory listing). I think this should be set automatically for each user, with no need to type an initial directory into the FTP client.
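    (One way to avoid any client-side setting, sketched under the assumption that the AuthUserFile uses the usual passwd-style format: with DefaultRoot ~ each account is jailed in, and therefore starts in, the home directory recorded in that file, so pointing the home field at the right folder is all that is needed. The hash, UID and GID below are placeholders.)
        grep '^upload:' /etc/proftpd/passwd
        # upload:$1$somehash:1001:1001::/home/webadmin/upload:/bin/false
        #                               ^^^^^^^^^^^^^^^^^^^^^ home field = directory the login lands in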

    Read the article

  • Gigabyte H55N-USB3: No video on HDMI

    - by newt
    I built a new PC with a Gigabyte H55N-USB3 and an Intel Core i5 650. With a monitor plugged into the DVI port, everything works fine. I installed Windows 7 32-bit and enabled Remote Desktop. After that, I unplugged the monitor, plugged the machine into the network and installed everything else (drivers, programs, etc.) via RDP. However, when I try to use the HDMI port on my TV, nothing appears, neither during boot nor after Windows starts. The TV says there's "no signal" (if I remove the cable, the message changes to "check cable"). The cable is new and works fine with my home theater on the same TV (by the way, it is the cable that came bundled with the home theater). The video driver is the latest from Intel's site; anyway, that shouldn't be the problem, since there is no image even during boot. Any ideas or tips would be welcome. I'm googling around but have found nothing useful yet.

    Read the article

  • Large scale file replication with an option to "unsubscribe" from a replicated file on a given machine

    - by Alexander Gladysh
    I have 100+ GB of files per day incoming on one machine. (File size is arbitrary and can be adjusted as needed.) I have several other machines that do some work on these files. I need to reliably deliver each incoming file to the worker machines. A worker machine should be able to free its HDD from a file once it is done working with it. It is preferable that a file is uploaded to the worker only once, processed in place and then deleted, without copying it somewhere else, to minimize the already high HDD load. (The worker itself requires quite a bit of bandwidth.) Please advise a solution that is not based on Java. None of the existing replication solutions that I've seen can do the "free the HDD from the file once processed" part, but maybe I'm missing something... A preferable solution should work with files (from the point of view of our business logic code) and not require the business logic to connect to some queue or other. (Internally the solution may use whatever technology it needs, except Java.)
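    (Not a product recommendation, just a sketch of how thin the non-Java plumbing can be: push each file to one worker as soon as it is fully written, drop the local copy on success, and let the worker delete its own copy when processing finishes. Hostnames, paths and the round-robin choice are all assumptions.)
        #!/bin/sh
        WORKERS="worker1 worker2 worker3"
        INCOMING=/data/incoming
        i=0
        inotifywait -m -e close_write --format '%f' "$INCOMING" | while read f; do
            set -- $WORKERS                            # pick a worker round-robin
            shift $(( i % $# )); i=$(( i + 1 ))
            rsync -a --partial "$INCOMING/$f" "$1:/work/incoming/" \
                && rm -f "$INCOMING/$f"                # free the distributor's disk
        done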

    Read the article

  • Trying to set up an SMTP server on Windows Server 2012

    - by datc
    I'm working on a website, and I need to test the functionality of sending email messages from ASP.NET, something like this:
        Dim msg As New MailMessage("email1", "email2")
        msg.Subject = "Subject"
        msg.IsBodyHtml = True
        msg.Body = "Click <a href='site'>here</a>."
        Dim client As SmtpClient = New SmtpClient()
        client.Host = "My-Server"
        client.Port = 25
        client.DeliveryMethod = SmtpDeliveryMethod.Network
        client.Send(msg)
    This is running from a Windows 8 workstation. I've installed the SMTP server on my Windows Server 2012 machine. The mail shows up in the mailroot/Queue folder and sits there, eventually getting deposited into Badmail. Now, I have AT&T U-verse at home and a few devices connected to the gateway, one of which we'll call "My-Server". When I run SmtpDiag from, say, datc@... to a hotmail.com address, the SOA serial number match passes, the Local DNS (99-135-60-233.lightspeed.bcvloh.sbcglobal.net) and Remote DNS (hotmail.com) tests do *not* pass, and ultimately I get "Connecting to the server failed. Error: 10060. Failed to submit mail to mx2.hotmail.com." When I set My-Server's IP to static and equal to the external IP, 99.135.60.233, and run SmtpDiag again, the SOA, Local DNS and Remote DNS tests all pass, but I still get the same 10060 error. Same for yahoo.com, gmail.com and so forth. Is it my ISP's job to fix this? Is some PTR record missing somewhere? Is it at all possible to have a home-based SMTP server? All I want is to test my email code. Perhaps my IP address is just not "trusted" somehow. Thanks.

    Read the article

  • Utilities for finding x/y screen coordinates

    - by slothbear
    I have an application that needs the user to enter the x and y location of a couple of items on the screen. [Yes, this is crap and I'll replace it, but for now...] On Mac OS X, I'm using Snapz Pro X. When I choose to take a snap of a selection, the interface displays the mouse cursor location. This is OK for my personal use, but I can't ask users to buy a $69 program for this function. I thought I had a solution with the built-in program "Grab", but it reports the coordinates from the bottom left, and I need it from the top left. I don't have access to Windows; not sure what to use there. I've not even been able to come up with good search terms since mouse and coordinates and x/y are so common. Ideas are welcome there. Extra credit: same x/y finder for Linux. A Java program would be OK too, since the main application is in Java.
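    (For the Linux extra credit: xdotool, if installed, already reports the pointer position measured from the top-left corner, which is the orientation wanted here; the sample output below is approximate.)
        xdotool getmouselocation
        # x:412 y:233 screen:0 window:62914561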

    Read the article

  • Mac OS X - Time Machine backup fails verification - What can I do to save the history?

    - by usermac75
    How do I make Time Machine create a new complete backup without losing older versions of backed-up files? In more detail: I am using Time Machine on OS X (Snow Leopard) to back up the whole computer to an external drive. I especially like the "history", i.e. the feature that allows you to restore an older version of a file. Problem: I had some data corruption on my external backup drive. I repaired it with Disk Utility, which found and fixed some faults, and after that the external drive was OK and I could use Time Machine again, so I let Time Machine do one more backup. I then ran a verification along the lines of http://superuser.com/questions/47628/verifying-time-machine-backups, namely sudo diff -qr . $HOME/Desktop 2>&1 | tee $HOME/timemachine-diff.log. However, after running the command above, several differences and missing files were reported, approximately 200 files in total. While some of the missing files were caches or excluded directories, the differences do bother me, especially as some of my important documents are listed as differing. How can I make sure that the data on the external drive is synced correctly? Is it possible to have Time Machine do a complete new backup without losing the history? Or to have Time Machine compare all files for differences and re-copy all files that differ? Or can I set some flag on the files that do not match to have them copied again (like the archive flag in Windows/DOS)? I'd rather not touch the files themselves, because I would like to keep the date of last change and date of creation. Thank you for your thoughts!
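    (For a more targeted spot check than diffing the whole Desktop, something along these lines compares a single folder against the latest snapshot. The backup volume name "Backup", the machine name "MyMac" and the volume folder "Macintosh HD" are assumptions that need adjusting to the actual setup.)
        LATEST="/Volumes/Backup/Backups.backupdb/MyMac/Latest/Macintosh HD"
        sudo diff -qr "$HOME/Documents" "$LATEST$HOME/Documents" 2>&1 | tee ~/tm-diff.log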

    Read the article

  • Relevance and Necessity of SNMP

    - by Adam Tannon
    Edit: I am in the process of designing a Java-based monitoring tool that will send back periodic "health checks" from a Java app deployed to a cluster of GlassFish servers, and I am trying to figure out the best protocol for this tool to send information back to the monitoring server. After an initial research effort on my part, it seems like SNMP is just a protocol for monitor-type applications to communicate the "health status" of something (a part of a network, a server, a cluster, an application, etc.) to the rest of the network. If the above is incorrect, please correct me! Assuming the generalization is more or less accurate, my next question is: why is this a protocol? In the age of REST/SOAP/TCP protocols, why is there a need for a standardized protocol that only fits one type of application (monitoring)? In other words, if I'm a developer assigned to build a new monitoring tool that periodically polls a server and reports on its CPU and available memory, what advantages does SNMP give me over just POSTing to a RESTful API via plain old HTTP? I'm sure I'm missing something here; I just need someone to help connect the dots! Thanks in advance!
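    (A small illustration of what the standardization buys: the numeric OIDs below mean the same thing on any SNMP agent, so one poller can read CPU load and memory from a router, a Linux box or a GlassFish host without per-target code. This assumes the net-snmp command-line tools and a host, whose name here is made up, exposing a read-only "public" community.)
        snmpwalk -v2c -c public gf-node01 1.3.6.1.2.1.25.3.3.1.2   # hrProcessorLoad, one row per CPU
        snmpwalk -v2c -c public gf-node01 1.3.6.1.2.1.25.2.2       # hrMemorySize, in KB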

    Read the article

  • Connect USB hard drive to wireless router on RJ45 port? Possible?

    - by lawphotog
    Just a quick bit of background. I was trying to set up a wireless networked hard drive at home. My wireless router doesn't take USB, so I am considering a few options. First I was considering getting something like WD My Cloud. My router is an old one provided by the service provider and it only has 10/100 Ethernet, while WD My Cloud has a Gigabit interface, so unless I change to a new router, data transfer will be slow. Upgrading the router is therefore a must if I want fast transfer speeds. Plus, I already own an external hard drive with a USB 3.0 interface, so if I get a router like the Netgear D6300 I can have a decent-speed wireless shared drive at home and use my existing HDD instead of WD My Cloud. But the router isn't cheap, so I am saving up for that. In the meantime I found out about USB-to-RJ45 adaptors. I read the reviews: some say they work and some say they don't, but they didn't really say what they were trying to do, so I'm confused. So if I bought an adaptor like this, could I connect my existing HDD (USB) to my existing router (RJ45) and use it as a shared drive for data transfer? I know it will be slow, as the adaptor will only have USB 2.0 and 10/100 Ethernet, but that's fine as a temporary setup until I get my new router.

    Read the article

  • NGINX: How do I calculate an optimal no. of worker processes and worker connections?

    - by bodacious
    Our web app is running on a Linode 2048 server at the moment (~2048 MB of RAM). The MySQL database is on another Linode of its own, so this server is really only handling NGINX and the Rails application. The application itself uses about 185976 (RSS) of memory per instance. Our traffic is under 1000 visits per day and the pages are mostly cached, so there are relatively few hits to the Rails app itself. My question is: how can I calculate optimal NGINX config settings for my app? Below is the current config:
        worker_processes 1;
        # pid of nginx master process
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
            passenger_root /home/user/.rvm/gems/ree-1.8.7-2011.01@URTV/gems/passenger-3.0.3;
            passenger_ruby /home/user/.rvm/rubies/ree-1.8.7-2011.01/bin/ruby;
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            # gzip settings
            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 2;
            gzip_vary on;
            gzip_proxied any;
            gzip_types text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            # load extra modules from the vhosts directory
            include /opt/nginx/vhosts/*.conf;
        }
    Any advice would be appreciated! :)
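    (A rough sizing pass rather than a formula, as a sketch: pin worker_processes to the CPU core count and let the per-process file-descriptor limit cap worker_connections. The example numbers are illustrative only.)
        grep -c ^processor /proc/cpuinfo   # core count  -> worker_processes
        ulimit -n                          # open-file limit -> ceiling for worker_connections
        # theoretical concurrent clients ~= worker_processes * worker_connections
        # (roughly halved when proxying, since each client then costs two connections);
        # with < 1000 visits/day and mostly cached pages, 1 worker x 1024 connections
        # already leaves a very large margin.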

    Read the article

  • Virtual host doesn't read .htaccess

    - by Charlie
    I just created a virtual host:
        <VirtualHost myvirtualhost:80>
            ServerAdmin webmaster@myvirtualhost
            ServerName myvirtualhost
            DocumentRoot /home/myname/sites/public_html
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/myname/sites/public_html/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
    It works, but it can't read the .htaccess file in public_html, which contains: DirectoryIndex otherindex.php. I tried changing all AllowOverride directives to All, but then I get a 500 error. How can I fix this? Thanks.
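    (A sketch of the usual cause and fix: .htaccess is ignored because the <Directory> blocks say AllowOverride None, and the 500 after switching everything to All typically comes from some .htaccess directive needing a module or option, which the error log will name. Enabling overrides only for the DocumentRoot is enough here.)
        #   <Directory /home/myname/sites/public_html/>
        #       ...
        #       AllowOverride Indexes    # enough for DirectoryIndex; "All" once the 500 cause is fixed
        #   </Directory>
        apache2ctl configtest && sudo /etc/init.d/apache2 reload
        tail -n 20 /var/log/apache2/error.log    # names the directive behind any 500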

    Read the article

  • Thousands of visits a day from untraceable traffic to website - Serious issue

    - by kel
    At the end of January we noticed a spike in traffic to what JetPack stats said was the home/archive page and what Google was classifying as going to /gaming/, which is an archive list in WordPress. This started off as ~3,000 unique visitors and jumped up to 65,000 unique visitors in one day, again all to the "home" page. This happened over the course of a couple of weeks and we thought we were being attacked. The traffic then dropped off for a few days but came back at only about ~15,000 uniques a day, and it has been like that every day since. We came to the conclusion that something wasn't tracking right somewhere, that this was legitimate traffic, and brushed it off. Now here comes the problem: Google AdSense has just disabled our account for "invalid clicks". We are trying to figure out where this traffic is coming from and stop it if it's not legitimate, or figure out a way to track it correctly. Specs for the site: dedicated server running CentOS 6 with nginx, php-fpm and MySQL. The site is built on WordPress and we use CloudFlare and W3 Total Cache. Analytics in use are Google Analytics, Quantcast, Alexa and Compete. Any kind of help would be awesome. UPDATE: I'm finding more people with the same type of problem and there doesn't seem to be a solution. http://netmeg.com/bot-attack/ http://stkywll.com/2012/03/02/annoying-cyborgs-attach-distort-analytics/ After looking at the access logs I noticed the requests were all coming from CloudFlare IPs. I looked into that and found out CloudFlare acts as a proxy, and there is a way to fix the logs in nginx (see the sketch below). The visitors resolve to many different ISPs in the US. They all go to /games/ or /gaming/ (/games/ redirects to /gaming/) and all seem to have the same user agent of Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0).
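    (A sketch of the nginx-side fix mentioned above, so the access logs and server-side analytics see visitor IPs instead of CloudFlare's proxy IPs. It assumes nginx was built with the realip module, and CloudFlare's published ranges change over time, so pull the current list rather than hard-coding it.)
        curl -s https://www.cloudflare.com/ips-v4   # current IPv4 ranges to trust
        # then, in the http{} block, one line per published range:
        #   set_real_ip_from 103.21.244.0/22;
        #   ...
        #   real_ip_header   CF-Connecting-IP;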

    Read the article

  • Revamping an old and unstable office IT-solution using Windows Server and OpenVPN

    - by cmbrnt
    I've been given the cumbersome task of totally redoing the IT infrastructure for a customer's office. They are currently running Windows XP everywhere, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices some sluggish behaviour or has problems with Flash games. To say the least, this isn't working for them. Now, I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 onto the new server and then run Windows Server 2008 as well as a separate Debian install for OpenVPN on it? I don't like the idea of the domain controller for the future AD also running a VPN server, because of stability issues if something goes to hell with either of them. There will be no redundancy, though. However, I'm not sure if there is anything to gain by installing the VPN solution on the Windows Server itself when it comes to accessing file shares on the network via VPN. I don't know how to let users logging in via the VPN access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged in to the Windows domain, but rather their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but not every computer will necessarily be configured to log in to the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear. Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules into their IT solution. I'd rather leave that untouched and go on my merry way to the next assignment.

    Read the article

  • Unified inbox shows twice on Thunderbird

    - by That Help Vampire Guy
    I'm using Thunderbird 24. If I show folders in Unified mode, my inbox folder shows up twice. If I choose the "All" folders mode, I see only one inbox. The issue started when I was using Ubuntu 12.04, but now I'm on Fedora 19 (I migrated the folders in /home). I do remember it not being duplicated, but then it started while still on Ubuntu. I noticed it when using the Conversations add-on, but I had previously used the add-on without this happening, and I have since disabled it and the issue persists. What I have tried: if I close Thunderbird and rename the .thunderbird folder in my home directory to something else, a new config profile is created, I have to set everything up again, and then it works as expected (the original post included before/after screenshots comparing the Unified and All Folders views). I'm trying to avoid resetting the profile and creating a fresh new one, because the server, MS Exchange, doesn't support IMAP labels, so I'd lose all the tags on my messages, and I have organised everything by tags instead of folders.

    Read the article

  • Cloning a git repository from a machine running OS X

    - by Mike
    Hi folks, I'm trying to host a git repository from my home OS X machine, and I'm stuck on the last step of cloning the repository from a remote system. Here's what I've done so far: 1) on the OS X (10.6.6) machine (heretofore dubbed the "server") I created a new admin user; 2) logged into the new user's account; 3) installed git; 4) created an empty git repository via "git init"; 5) turned on remote login; 6) set port mapping on my router (AirPort Extreme) to send SSH traffic to the server; 7) added a ".ssh" directory to the user's home directory; 8) from the remote machine (also an OS X 10.6.6 machine), sent that machine's public key to the server using scp and the login credentials of the user created in step 1; 9) to test that the server would use the remote machine's public key, SSH'd to the server using the username of the user created in step 1 and indeed was able to connect successfully without being asked for a password; 10) installed git on the remote machine; 11) from the remote machine, attempted "git clone ssh://user@my.server.address:myrepo" (where "user", "my.server.address" and "myrepo" are all replaced by the actual username, server address and repo folder name, respectively). However, every time I try the command in step 11, I get asked to confirm the server's RSA fingerprint, then I'm asked for a password, but the password for the user I set up on that machine never works. Any advice on how to make this work would be greatly appreciated!
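    (One thing worth checking before anything else is the clone URL itself: with the ssh:// scheme the colon introduces a port number, so the path must be written with slashes; alternatively, drop the scheme and use the scp-style form. The absolute repository path below is an assumption.)
        git clone ssh://user@my.server.address/Users/user/myrepo
        git clone user@my.server.address:myrepo      # scp-style, path relative to the login user's home
        ssh -v user@my.server.address                # verbose output shows which key is (or is not) offered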

    Read the article

  • CPANEL ModSec2 not working with SecFilterSelective

    - by jfreak53
    OK, I have the latest cPanel/WHM on a dedicated server. Here are my Apache specs:
        Server version: Apache/2.2.23 (Unix)
        Server built: Oct 13 2012 19:33:23
        Cpanel::Easy::Apache v3.14.13 rev9999
    I just ran a recompile using EasyApache, as you can see from the build date. When running it I made sure that ModSec was selected, and it stated in big bold letters something to the effect of "If you install Apache 2.2.x you get ModSec 2", so I believed it :) After recompiling I ran:
        grep -i release /home/cpeasyapache/src/modsecurity-apache_2.6.8/apache2/mod_security2.c
    The file is there, but grep doesn't output anything. If I run:
        grep -i release /home/cpeasyapache/src/modsecurity-apache_1.9.5/apache2/mod_security.c
    I of course get the ModSec 1 version output. Still, ModSec 2 appears to be installed, since the .c file is there. So I continued and put the following in modsec2.user.conf:
        SecFilterScanOutput On
        SecFilterSelective OUTPUT "text"
    Now when I restart Apache I get this error:
        Syntax error on line 1087 of /usr/local/apache/conf/modsec2.user.conf: Invalid command 'SecFilterScanOutput', perhaps misspelled or defined by a module not included in the server configuration
    Supposedly this is meant to work; I even have it running under ModSec 2 on a non-cPanel server set up manually, so I know ModSec 2 supports it. Anyone have any ideas? I have asked this question over at the cPanel forum and it got nowhere.
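    (One possible explanation, offered as a sketch: the SecFilter* directives belong to ModSecurity 1.x and were dropped in 2.x, which is exactly why the 2.x module refuses to parse them. The rough 2.x equivalent for matching response bodies is a SecRule; the rule id below is arbitrary.)
        #   SecResponseBodyAccess On
        #   SecRule RESPONSE_BODY "text" "phase:4,t:none,deny,id:900001"
        # after editing /usr/local/apache/conf/modsec2.user.conf accordingly, restart via cPanel:
        /scripts/restartsrv_httpd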

    Read the article

  • How to set up Windows 7 Professional as a NAS

    - by Enyalius
    I searched and didn't find any answers, so please forgive me if this is a repeat. Anyway, I have an older computer that I'm using as an HTPC, and I was hoping I could use it as a NAS/multimedia server as well. My primary uses would include accessing content on my PS3 (same LAN), accessing content from other computers on my home network and, if possible, accessing content from my Android phone over the Internet. I have used Subsonic to stream music to my Android phone and other computers before, but I would really like to find a way to do this natively if possible. I know that I can buy external hard disk cases that plug into the USB port of my router, or get a Drobo or another network storage solution, but I would really just rather not spend the money (especially considering that I already have a computer that should be able to do the job). Hardware involved: Apple AirPort Extreme base station router (most recent revision); home theater PC: Core 2 Duo @ 2.4 GHz, 8 GB DDR2 RAM, ~3.5 TB hard drive space; Sony PlayStation 3 Slim 120 GB; HTC Thunderbolt (I have 4G coverage), rooted and running Android 2.2.1; various Apple laptops; various Windows 7 desktops/laptops. Thanks in advance! Note: I have looked at open-source NAS software, but I would like to preserve the Windows Media Center functionality in Windows 7, so other NAS software is not an option for me currently.

    Read the article

  • Mysterious swap usage on EC2

    - by rusty
    We're in the middle of a project to move our infrastructure from a co-lo situation into Amazon EC2, and we've noticed some weird memory characteristics of the processes in our setup. Without going into too much detail about the specifics of our processes, we've noticed that on our EC2 instances "top" shows processes using a lot of swap space; in fact, much more than the amount of available swap or (if you add it all up) more than the available disk. Here's a sample top output:
        Mem:  7136868k total, 5272300k used, 1864568k free,  256876k buffers
        Swap: 1048572k total,       0k used, 1048572k free, 2526504k cached
        PID   USER   PR  NI  VIRT   RES   SHR   S  %CPU  %MEM  TIME+    SWAP  COMMAND
        4121  jboss  20  0   5913m  603m  14m   S  0.7   8.7   3:59.90  5.2g  java
        22730 root   20  0   2394m  4012  1976  S  2.0   0.1   4:20.57  2.3g  PassengerHelper
        20564 rails  20  0   2539m  220m  9828  S  0.3   3.2   0:23.58  2.3g  java
        1423  nscd   20  0   877m   1464  972   S  0.0   0.0   0:03.89  876m  nscd
    You can see, for instance, that jboss is reportedly using 5.2 GB of swap space, which is definitely impossible since there's only 1 GB allocated and none is being used (probably because there's still 1.8 GB of RAM free). And here are the results of uname -a:
        Linux xxx.yyy.zzz 2.6.35.14-106.53.amzn1.x86_64 #1 SMP Fri Jan 6 16:20:10 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
    We're running an AMI based off the default Amazon Linux AMI (Amazon Linux AMI release 2011.09, so some RHEL 5 and RHEL 6) with not too many customizations and definitely no kernel-level customizations. Something here tells me that on this particular kernel/distribution the reporting of swap, or maybe even of total memory usage, isn't what it appears to be... Any help would be appreciated!
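    (One detail that explains most of the numbers above: top's per-process SWAP column is simply VIRT minus RES, i.e. mapped-but-not-resident address space such as thread stacks, untouched heap and mmapped jars, not pages actually written to swap; the summary line already shows 0k of real swap in use. To see genuine per-process swap on this kernel:)
        grep VmSwap /proc/4121/status    # 4121 = the jboss PID from the top output above
        free -m                          # system-wide view; matches the "0k used" Swap line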

    Read the article
