Search Results

Search found 12611 results on 505 pages for 'matlab figure'.


  • System With Two Network Adapters [closed]

    - by Synetech inc.
    Hi, my system has a NIC (Marvell Yukon) built into the motherboard, but I also have a D-Link (RealTek) card. I figure that using the D-Link and disabling the Marvell makes the most sense, though I'm wondering if maybe the built-in one has better throughput (not that my Internet connection is that fast). I'm also wondering about the merits of using both at the same time. My router has four ports, and I have experimented with enabling both NICs and plugging both into the router. I was able to connect to the Internet, but the pattern of usage seemed irregular (which adapter was chosen for the transfer at any given point). I also considered bridging the two, but am having difficulty finding out what exactly creating a network bridge does in the context of the Windows Network Connections window. I am familiar with the concept of connecting networks, so it seems to me that bridging two connections on the same segment is pointless at best (and can cause problems like loops?). Does anyone have tips on what to do when a system has more than one NIC, and any clarification on the bridge option? Thanks a lot.


  • Apache directory access with virtual host

    - by alexeygaidamaka
    I have a virtual host with a configuration like the one below. When I try to get into foobar.com/dir and provide a valid username/password pair, I get a 403 Forbidden page instead of the directory contents. www.foobar.com/dir has 777 rights and .htpasswd is chmoded 644, but I can't figure out why I still can't see the contents. Please give me a hint.

        ServerAdmin webmaster@localhost
        ServerName www.foobar.com
        ServerAlias www.foobar.com
        DocumentRoot /var/www/foobar
        <Directory />
            Options FollowSymLinks
            AllowOverride All
        </Directory>
        <Directory /var/www/foobar>
            Options -Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        <Directory /var/www/foobar/dir>
            AllowOverride AuthConfig
            AuthName "Authorize yourself, please!"
            AuthType Basic
            AuthUserFile /etc/apache2/.htpasswd
            AuthGroupFile /dev/null
            Allow from All
            Order Allow,Deny
            Require valid-user


  • Is there a way to log commands that a user runs in Windows 7?

    - by camster342
    I manage a large enterprise environment, and while we try to advise users not to, there are inevitably users who need local admin access to their machines. The problem is that some of these users like to "fiddle" and sometimes screw up their machines in "wonderful" ways. Is there an easy way to log what a user does on a machine, specifically in the command prompt? Maybe there are 3rd-party tools I could use to log this information? On the Linux systems I used in past ages, you could look at a user's bash history file to see what commands they had run. While I realise that specific log could be altered by the user if they wanted to cover their tracks, that is the sort of log I'm looking for. If there are ways I can also log other system-configuration-type changes they make (not necessarily command-line based), that's also useful. I know about event/system logs and so on, but they don't necessarily catch all the information I need to figure out how the user has buggered their machine this time.
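
    One avenue worth noting: Windows can write a Security-log event (ID 4688) for every process launched once "Audit process creation" is enabled through Group Policy, which is the closest built-in analogue to a bash history. Below is a sketch of pulling those events with Python via wevtutil; it assumes the audit policy is already on and an elevated prompt, and note that on Windows 7 the event names the executable but not its command-line arguments.

        import subprocess

        # Query the 20 newest process-creation events (ID 4688) from the
        # Security log, formatted as text.
        result = subprocess.run(
            ["wevtutil", "qe", "Security",
             "/q:*[System[(EventID=4688)]]", "/rd:true", "/c:20", "/f:text"],
            capture_output=True, text=True, check=True,
        )
        print(result.stdout)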


  • .php files blank - .php5 files work

    - by Kleidi
    I have a problem with a server of mine. I've installed Virtualmin/Webmin on it for administration, and I have 1 domain on it. DNS management is external. On this domain I only have an HTML "Under Construction" index and 5 subdomains. In all those subdomains I have PHP systems running perfectly. I've tried to install Wordpress on the main domain and I'm having some issues: no .php files load. I made a phpinfo file to check, and it won't work either; only a blank page appears, and when I view its source code in the browser, the raw PHP code appears. I changed the extensions to .php5 and it worked perfectly. Something is going wrong with it, but I can't figure out what. I have checked the Apache error log and nothing appears. 3 days ago I upgraded from PHP 5.2.* to 5.4.21. The server is running CentOS 5.10.


  • Best way to create and restore a Drive Image with Windows 7? [closed]

    - by jasondavis
    Possible Duplicate: Want to create a system image. I am about to build a new PC. I am a Windows 7 user. For years now I have wanted to install Windows and all my favorite software, music, etc., then make a drive IMAGE, and be able to go in 6 months later or WHENEVER I want to start fresh: completely format my drives, restore my IMAGE, and have all my settings, programs, etc. be just like when I created the original image. I know there are many ways to do this, but I have never done it 100% successfully, and I have about a week to figure out how to do it perfectly for when I build my new PC. I have heard good things about Acronis True Image for doing what I describe, and I tried using it in the past, but the newer versions are overly complex and don't even seem to work the way I hoped. I also see that Windows 7 has some sort of drive-image creator itself. Does the newer Windows 7 image creator do what I am describing above: a complete drive image (Windows, all programs and settings) saved to an IMAGE file that can easily be restored to ANY hard drive in the future? Please share your experiences, tips, and ideas on how to achieve this the easiest and most reliable way.


  • Packet flooding while configuring a Debian L2TP/IPSec client?

    - by Joseph B.
    I'm currently at my wits' end trying to configure an L2TP over IPSec VPN connection on my Debian box, using openswan and xl2tpd, connecting to a server of unknown configuration. I've managed to successfully establish the connection, and everything appears to be working well until I attempt to set the VPN connection as my default route, at which point I see a massive flood of packets being transmitted (to the tune of ~1.5 GB in about 2 minutes) until the server drops my connection. Prior to this, network traffic on all my interfaces is minimal. According to iftop, the majority of this traffic appears to be coming out of port 12, although I can't seem to figure out how to tie it to a specific process. If I instead just route traffic destined for 74.0.0.0/8 through it, I'm able to access Google's servers through the VPN without issue. My xl2tpd.conf file is:

        [lac vpn-nl]
        lns = example.vpn.com
        name = myusername
        pppoptfile = /etc/ppp/options.l2tpd.client

    My options.l2tpd.client file is:

        ipcp-accept-local
        ipcp-accept-remote
        refuse-eap
        require-mschap-v2
        noccp
        noauth
        idle 1800
        mtu 1410
        mru 1410
        usepeerdns
        lock
        name myusername
        password mypassword
        connect-delay 5000

    And my routing table looks like:

        Destination     Gateway         Genmask          Flags  Metric  Ref  Use  Iface
        10.5.2.1        *               255.255.255.255  UH     0       0    0    ppp0
        10.0.50.0       *               255.255.255.0    U      0       0    0    eth0
        10.50.0.0       *               255.255.0.0      U      0       0    0    eth0
        10.0.0.0        *               255.255.0.0      U      0       0    0    eth0
        192.168.0.0     *               255.255.0.0      U      0       0    0    eth0
        loopback        *               255.0.0.0        U      0       0    0    lo
        default         *               0.0.0.0          U      0       0    0    ppp0

    I'm seeing absolutely nothing in auth.log and syslog during this time, and can't seem to find any other log files it might be writing to. Any suggestions would be appreciated!


  • Is it possible to trace someone using Google during an online exam?

    - by George
    I happen to be a professor at a reputed college. I want to design an online exam for over 1000 students via around 50 computers, right after the vacation ends. Now the problem is that I have heard that many students use Google in a different tab to find answers when no invigilator is around. I want to know if there is a way to back-trace it after the exams, via some kind of history or any other possible way. In our university there is a standard system. I am not good with computers, but I will try to explain: each computer uses Mozilla to connect to a centrally located server via an IP address. The students open it and enter a unique ID and password to start the exam. Many questions are jumbled, and different groups of students take the exam in different time slots. Is there any way to trace it? I want to set an example for students so they won't cheat and will take exams honestly. Additional details: since the number of computers is smaller than the number of students, more than 10 students are going to use a single computer on a single day, over a period of 10 hours. After this, if I check the history (and let's say someone even forgot to delete it and I see it), will I be able to figure out who among the 10 has done it? Moreover, is it even practical and feasible?


  • Relevance and Necessity of SNMP

    - by Adam Tannon
    Edit: I am in the process of designing a Java-based monitoring tool that will send back periodic "health checks" from a Java app deployed to a cluster of GlassFish servers. I am trying to figure out the best protocol for this monitoring tool to use when sending information back to the monitoring server. After an initial research effort on my part, it seems like SNMP is just a protocol for monitor-type applications to communicate the "health status" of something (a part of a network, a server, a cluster, an application, etc.) to the rest of the network. If the above is incorrect, please correct me!!! Assuming the generalization is more or less accurate, my next question is: why is this a protocol!?!? In the age of REST/SOAP/TCP protocols, why is there a need for a standardized protocol that only fits one type of application (monitoring)? In other words, if I'm a developer assigned to building a new monitoring tool that periodically polls a server and reports on its CPU and available memory, what advantages does SNMP give me over just POSTing to a RESTful API via plain 'ole HTTP? I'm sure I'm missing something here - I just need someone to help connect the dots! Thanks in advance!
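
    For comparison, here is a minimal sketch of the plain-HTTP approach the question describes - a poller POSTing a health-check document to the monitoring server. It uses only the Python 3 standard library; the endpoint URL and the payload field names are hypothetical.

        import json
        import urllib.request

        # Hypothetical health-check payload; field names are made up for
        # illustration and would be whatever the monitoring server defines.
        payload = json.dumps({
            "host": "glassfish-node-1",
            "cpu_percent": 42.5,
            "free_memory_mb": 1024,
        }).encode("utf-8")

        req = urllib.request.Request(
            "http://monitor.example.com/health",   # hypothetical endpoint
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.status)

    The usual argument for SNMP is the other half of this trade-off: the same CPU and memory figures are already exposed under standardized OIDs that off-the-shelf NMS tooling can poll, so no custom server-side API has to be designed, documented, or maintained.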


  • passenger and apache memory usage

    - by Brent Faulkner
    On a "CentOS release 6.2 (Final)" server (with Ruby 1.9.3 and Rails 3.2), and using more memory than expected. Looking at passenger-memory-stats I see a couple of HUGE httpd processes... any thoughts on how I can figure out what's going on and reduce the memory usage? Stats are included here... thanks! ---------- Apache processes ----------- PID PPID VMSize Private Name --------------------------------------- 1371 1 202.1 MB 0.1 MB /usr/sbin/httpd 4573 1371 210.2 MB 5.0 MB /usr/sbin/httpd 4778 1371 202.5 MB 0.6 MB /usr/sbin/httpd 4780 1371 217.6 MB 9.4 MB /usr/sbin/httpd 4781 1371 217.1 MB 9.1 MB /usr/sbin/httpd 4856 1371 202.4 MB 0.5 MB /usr/sbin/httpd 4863 1371 204.1 MB 2.1 MB /usr/sbin/httpd 5027 1371 202.4 MB 0.5 MB /usr/sbin/httpd 5043 1371 202.4 MB 0.4 MB /usr/sbin/httpd 5044 1371 205.5 MB 2.7 MB /usr/sbin/httpd 5072 1371 202.4 MB 0.5 MB /usr/sbin/httpd 5084 1371 202.4 MB 0.5 MB /usr/sbin/httpd 32111 1371 1297.0 MB 246.5 MB /usr/sbin/httpd 32579 1371 1914.3 MB 215.5 MB /usr/sbin/httpd ### Processes: 14 ### Total private dirty RSS: 493.42 MB -------- Nginx processes -------- ### Processes: 0 ### Total private dirty RSS: 0.00 MB ----- Passenger processes ----- PID VMSize Private Name ------------------------------- 4180 280.5 MB 24.4 MB Passenger ApplicationSpawner: /var/www/apps/people/current 4345 309.5 MB 53.4 MB Rack: /var/www/apps/people/current 4800 300.2 MB 55.2 MB Rack: /var/www/apps/people/current 4808 297.8 MB 52.5 MB Rack: /var/www/apps/people/current 4815 297.4 MB 52.4 MB Rack: /var/www/apps/people/current 4822 302.7 MB 55.6 MB Rack: /var/www/apps/people/current 22780 209.0 MB 0.0 MB PassengerWatchdog 22783 991.5 MB 1.3 MB PassengerHelperAgent 22785 113.4 MB 1.1 MB Passenger spawn server 22788 144.6 MB 0.0 MB PassengerLoggingAgent 22911 310.4 MB 64.0 MB Rack: /var/www/apps/people/current 22939 311.6 MB 53.5 MB Rack: /var/www/apps/people/current 26175 304.1 MB 55.8 MB Rack: /var/www/apps/people/current 26182 310.4 MB 44.0 MB Rack: /var/www/apps/people/current ### Processes: 14 ### Total private dirty RSS: 513.24 MB


  • Mercurial hook fails on Windows

    - by Nick Hodges
    I am trying to use the headcount hook (https://bitbucket.org/dgc/headcount/overview) with my main development repository. I pulled the code and placed it in C:\Python26\Lib\site-packages. I made the following entries in my hgrc file:

        [hooks]
        pretxnchangegroup.headcount = python:headcount.headcount.hook

        [headcount]
        push_ok = *
        commit_ok = *
        warnmsg = %(headcount)d new heads detected. You may not push new heads to this repository.
        debug = False

    All this is as per the install instructions. I then cloned the repository, created a branch, committed a change to that branch, and then issued hg push -f as a test. However, this fails with:

        C:\junk\htmlwriter>hg push -f
        pushing to c:\code\htmlwriter
        searching for changes
        adding changesets
        adding manifests
        adding file changes
        added 1 changesets with 1 changes to 1 files
        transaction abort!
        rollback completed
        abort: pretxnchangegroup.headcount hook is invalid (import of "headcount.headcount" failed)

    I then ran this:

        C:\Python26>python c:\Python26\Lib\site-packages\headcount\headcount.py
        Traceback (most recent call last):
          File "c:\Python26\Lib\site-packages\headcount\headcount.py", line 2, in <module>
            import mercurial.node
        ImportError: No module named mercurial.node

    I'm far from a Python expert, so can someone help me figure out how to get the headcount hook to run inside my Mercurial environment? Details: Windows 7, Mercurial 1.7.2, TortoiseHg 1.1.7.
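
    The traceback gives a hint: headcount.py itself imports mercurial, so whichever Python interpreter runs the hook must be able to import the mercurial package. TortoiseHg on Windows typically bundles its own frozen interpreter, which cannot see packages dropped into C:\Python26\Lib\site-packages. A quick diagnostic sketch, run with the interpreter you expect to execute the hook (nothing here is hg-specific):

        import sys

        print("interpreter: %s" % sys.executable)
        try:
            import mercurial.node
        except ImportError as exc:
            print("mercurial is NOT importable from this interpreter: %s" % exc)
        else:
            import mercurial
            print("mercurial package found at: %s" % mercurial.__file__)

    If the import fails under the system Python but the hook must run there, installing Mercurial as a Python package for that interpreter (rather than only the standalone binary) is the usual fix.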


  • Apache Virtualhost entry with Windows hostname

    - by gshauger
    I have a Windows domain controller and we use it for DNS on our internal network. I have an Ubuntu box with an IP address of 172.16.34.149. Within the Windows DNS I created the forward and reverse lookup entries for the name Endymion. Naturally, whenever I FTP/SSH/HTTP/etc. to the hostname Endymion, it resolves correctly to my Ubuntu box. I wanted to do some web development on this box for an existing site. There were problems when I placed the website in a subfolder of /var/www/. Let's just say it was in the folder /var/www/projectx/. The issue involved the incorrect resolution of non-relative URLs. So I figured I could create a new DNS entry for the hostname projectx. Sure enough, when I FTP/SSH/HTTP/etc. to the hostname projectx, it takes me to the same Ubuntu box as the hostname Endymion... this is what I would expect. I now have two hostnames for the same box. I then create a VirtualHost entry in httpd.conf that looks like the following:

        <VirtualHost *:80>
            DocumentRoot /var/www/projectx
            ServerName projectx
            ServerAlias projectx
        </VirtualHost>

    Sure enough, when I go to a browser and type in http://projectx/ it takes me to the correct subfolder. Everything works!!! Not so fast. I then go to http://endymion/ and instead of taking me to /var/www/ it takes me to /var/www/projectx/. Clearly I'm missing something. Help please! ;)


  • Powershell Get-Process cannot connect to remote computer

    - by amandion
    I've been struggling with this for a few hours and can't figure it out. I have two Windows 7 computers. One is my workstation, which uses PowerShell for administrative maintenance. The other is the machine I'd like to execute remote PowerShell cmdlets on. On both computers, I've enabled PowerShell remoting and added all computers to TrustedHosts with the * value. On the remote computer, I've started the Remote Registry service and ensured that the DCOM, Winmgmt and WinRM services are running. The firewall is disabled on the remote machine too. The cmdlet I try to run is:

        Get-Process -ComputerName $name

    where $name is the name of the remote machine. I keep getting an error saying that it could not connect to the remote PC. I've also tried using the IP and I get the same error. These PCs are not in a domain. I am able to do the following successfully:

        Invoke-Command {Get-Process} -ComputerName $name -Credential $creds

    where $name is the machine name and $creds is the username and password for the remote computer's local Admin account. This gives me the output I would expect. While this is an acceptable workaround, I am curious: why doesn't using Get-Process with remoting work on its own? I've seen a few articles on the web suggesting people have had success with it. Each time, I am using PowerShell on my workstation with elevated privileges. Any ideas?


  • Would a PHP application benefit from being served from a RAM drive?

    - by Tom Marthenal
    I am in charge of hosting a PHP application that is large and slow, but easy to scale. The application is entirely static, with no writable disk storage needed. We've profiled the application, and the main bottleneck appears to come from loading the application, not from the work the application does. The application is not CPU-intensive, although it does use a fair amount of memory (think Magento). Currently we distribute it by having a series of servers with the same PHP files on their hard drives and a load balancer in front of them. Easy but expensive. I've been reading about RAM disks and the I/O benefits they offer, and was wondering if they would be well-suited to PHP applications. Since PHP applications are loaded from disk for every request and often involve lots of different files (as opposed to being kept in memory like a Java application), I would figure that disk performance can be a severe bottleneck. Would placing the PHP files on a RAM disk and using the mount point as Apache's document root offer performance benefits? A startup script could create the RAM drive and then copy the files (which are plain text and small) from a permanent location to the temporary RAM drive. Does this make sense, or should I just trust the Linux kernel to cache the appropriate files in memory by itself?
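
    For what it's worth, the startup script described above is only a few lines. Here is a minimal sketch, assuming Linux, root privileges, and Python 3.8+ for dirs_exist_ok; the paths and the tmpfs size are placeholders, not recommendations:

        import os
        import shutil
        import subprocess

        SRC = "/srv/php-app"          # permanent copy of the PHP files (hypothetical path)
        MNT = "/mnt/php-ramdisk"      # RAM-backed Apache document root (hypothetical path)

        os.makedirs(MNT, exist_ok=True)
        # Mount a tmpfs over the mount point; 256m is an arbitrary example size.
        subprocess.run(["mount", "-t", "tmpfs", "-o", "size=256m", "tmpfs", MNT],
                       check=True)
        shutil.copytree(SRC, MNT, dirs_exist_ok=True)

    That said, the closing question is the crux: after the first request the kernel's page cache serves those same files from RAM anyway, and an opcode cache avoids re-parsing them entirely, so in practice a tmpfs document root often buys very little.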


  • mysql - moving to a lower performance server, how small can I go?

    - by pedalpete
    I've been running a site for a few years now which really isn't growing in traffic, and I want to save some money on hosting but keep it going for the loyal users of the site and API. The database has a nearly 4-million-row table and runs on a 4 GB dual Xeon 5320 server. When I check server stats on this server with ps -aux, I get returns of mysql running at about 11% capacity, so no serious load. The main query against mysql runs in about 0.45 seconds. I popped over to linode.com to see what kind of performance I could get out of one of their tiny boxes, and their 360 MB RAM Xen VPS returns the same query in 20 seconds. Clearly not good enough. I've looked at the mysql variables, and they are both very similar (I've included the show variables output below, if anybody is interested). Is there a good way to decide on what size server is needed based on what I'm coming from? Is it RAM that is likely making the difference, given the large table size? Is there a way for me to figure out how much RAM would be ideal?

        | Variable_name              | Value                        |
        | -------------------------- | ---------------------------- |
        | auto_increment_increment   | 1                            |
        | auto_increment_offset      | 1                            |
        | automatic_sp_privileges    | ON                           |
        | back_log                   | 50                           |
        | basedir                    | /usr/                        |
        | bdb_cache_size             | 8384512                      |
        | bdb_home                   | /var/lib/mysql/              |
        | bdb_log_buffer_size        | 262144                       |
        | bdb_logdir                 |                              |
        | bdb_max_lock               | 10000                        |
        | bdb_shared_data            | OFF                          |
        | bdb_tmpdir                 | /tmp/                        |
        | binlog_cache_size          | 32768                        |
        | bulk_insert_buffer_size    | 8388608                      |
        | character_set_client       | latin1                       |
        | character_set_connection   | latin1                       |
        | character_set_database     | latin1                       |
        | character_set_filesystem   | binary                       |
        | character_set_results      | latin1                       |
        | character_set_server       | latin1                       |
        | character_set_system       | utf8                         |
        | character_sets_dir         | /usr/share/mysql/charsets/   |
        | collation_connection       | latin1_swedish_ci            |
        | collation_database         | latin1_swedish_ci            |
        | collation_server           | latin1_swedish_ci            |
        | completion_type            | 0                            |
        | concurrent_insert          | 1                            |
        | connect_timeout            | 10                           |
        | datadir                    | /var/lib/mysql/              |
        | date_format                | %Y-%m-%d                     |
        | datetime_format            | %Y-%m-%d %H:%i:%s            |
        | default_week_format        | 0                            |
        | delay_key_write            | ON                           |
        | delayed_insert_limit       | 100                          |
        | delayed_insert_timeout     | 300                          |
        | delayed_queue_size         | 1000                         |
        | div_precision_increment    | 4                            |
        | keep_files_on_create       | OFF                          |
        | engine_condition_pushdown  | OFF                          |
        | expire_logs_days           | 0                            |
        | flush                      | OFF                          |
        | flush_time                 | 0                            |
        | ft_boolean_syntax          | + -                          |

    For some reason, that table formats properly in the preview, but apparently not when viewing the question. Hopefully it isn't needed anyway.


  • Win 7: constant BSOD 0x7B on boot, not producing any dump files - where to go from here?

    - by prayingpantis
    So one of my Win 7 PCs has been getting a BSOD on boot (roughly a second after the load screen starts) after a power failure. The complete stop code is:

        0x0000007B (0x80786B58, 0xC0000034, 0x00000000, 0x00000000)

    I've searched for quite a while now on the net, and it seems like most people gave up after getting 0x7B and no dump files. What I've tried so far:

    - Startup repair: reports it cannot repair the computer automatically. "BadPatch" is reported somewhere in a problem signature contained in the problem details.
    - Startup repair with a Win 7 CD: also fails. I can't recall what the error was, but it was not the same as the error produced by the startup tool shipped with the version of Win 7 installed on my machine (I think the text had something ACL-ish in it).
    - A boot disk (Hiren's boot ISO): I used it to enable the CrashDump registry key, and then after the BSOD I read the HDD's dump locations, but they were empty. Note: I'm quite sure the registry keys I edited are the correct ones, since the reboot-on-BSOD option was enabled by default, and after I changed the regkey controlling this functionality to 0, the BSOD stayed on screen when I booted again.
    - Check disk: works and returns no problems; I'm also able to access all my files on the HDD.
    - Mem test: works and returns no errors.

    So I'm not sure what else I can do to figure out the problem here. I read somewhere that you can use WinDbg to remote-debug another PC, but I'm not sure if this is possible since the OS isn't even loaded yet? Also, the last driver change I made on the system was installing a video driver, but I had no problems with it and was able to reboot several times until the power outage happened and the BSOD appeared. Any help or guidance for a way to debug this problem would really be appreciated (I'm not really keen to try a whole bunch of random fixes; I'd rather try to narrow down the problem first).


  • Does anyone know where I could find a 2 input USB voltage meter?

    - by John O
    What we really need is a tiny UPS, of sorts. We'll be hooking up a solar cell and a battery to a single-board computer. Currently, that SBC is a custom PIC32 device, and it does its own UPS and voltage-monitoring duties. I've been tasked with trying to replicate all of its features with off-the-shelf products... and for the most part I've succeeded. But I don't currently have any way to switch between two sources of juice, or to monitor when they're getting low. These guys have something: http://www.mini-box.com/picoUPS-100-12V-DC-micro-UPS-system-battery-backup-system - I really like it, and the price is well within the budget. We might even work it in, though it does 12 V and I'll probably be using 5 V... there are enough engineers on hand to figure something out. But I'd still have no idea what the voltage was for the PV or the battery. I was hoping there was some simple little USB multimeter thing I could use to monitor this, but I can't seem to come up with anything. I've found all sorts of cool hardware, but nothing that will help us. Does anyone know of anything?


  • Is there a way to do a sector level copy/clone from one hard drive to another?

    - by irrational John
    Without going into distracting details, I'm attempting to duplicate the contents of the 500 GB drive in my MacBook to another 500 GB drive. But this is turning out to be an unexpected hassle, because the drive contains both the OS X partition and an NTFS partition with Win 7 via Apple's Boot Camp. With the exception of Clonezilla, the tools I have looked at so far all have some limitation. The Mac tools don't want to deal with the NTFS partition. The Windows tools are totally clueless about the HFS+ partition and/or the hybrid MBR/GPT Boot Camp partitioning. Clonezilla looked like it would do what I want, but apparently I can't figure out how to use it. After doing what I thought was a sector-to-sector copy, I found that only the NTFS partition had been migrated; the others were apparently empty. (And frankly, I'm not positive Clonezilla migrated the partition table correctly either.) Note: it takes over 2 hours using SATA to read/write all sectors with these drives, so I'm not up for using trial and error to narrow in on the right combination of Clonezilla options. I'm beginning to think that maybe the answer is to boot Linux (probably Ubuntu) and then use some ancient BSD command. Trouble is, I don't know what command (or parameters) to use in order to do a sector-level copy from one drive to another. As far as I know the drives have the same number of sectors, so this should be trivial. Sigh.
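
    For the record, the ancient command in question is dd: something like dd if=/dev/sdX of=/dev/sdY bs=4M conv=noerror copies every sector, partition table included, regardless of the filesystems involved. The same raw copy can be expressed in a few lines of Python; the device names below are placeholders, so verify them first (with lsblk or diskutil list), since this overwrites the target drive completely and must run as root:

        import shutil

        SRC = "/dev/sdb"   # source disk (hypothetical -- confirm before running)
        DST = "/dev/sdc"   # target disk (hypothetical -- all contents destroyed)

        # Stream the entire block device, start to end, in 4 MiB chunks.
        with open(SRC, "rb") as src, open(DST, "wb") as dst:
            shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)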


  • Azure VM won't boot after sysprep; integration tools installed

    - by Mark Williams
    I have installed the Azure integration components and used sysprep on a Windows 2012 VM. Now the machine won't start up. I uploaded the VHD to Azure - it failed there too. When I start up the VM I get a PowerShell window that hangs for a bit; eventually I get the following error, after which the machine restarts:

        New-Object : The dependency service or group failed to start.
        (Exception from HRESULT: 0x8007042C)
        At line:1 char:1
        + New-Object -comobject WaAgent.WindowsSetupComponent | % { $_.HandleSetupError() ...
        + CategoryInfo          : ResourceUnavailable: (:) [New-Object], COMException
        + FullyQualifiedErrorId : NoCOMClassIdentified,Microsoft.PowerShell.Commands.NewObjectCommand

    I have tried renaming unattended.xml and turning on boot logging. Neither of those yielded much help. Is there a way I can disable the Azure components that run during OOBE? That seems to be the source of the problem. Mounting the VHD is easy. 0x8007042C looks like a firewall issue, based on my googling; unfortunately I can't get the machine to boot so I can figure that issue out. Also, I can't get around this problem by booting into safe mode. Thanks for your help, guys.


  • Outlook 2010 IMAP account - send on behalf

    - by Master of Celebration
    So I was looking for a way to manage the mail distribution of online shops, newsfeeds, etc., and have a nice solution via distribution groups, a.k.a. alias addresses. For example, I register an account on eBay using "[email protected]" (where org.com is my company, obviously). That address is an alias and can be managed on my on-premises mail server by setting the destination to somebody's mailbox, independent of logging on to eBay - in case somebody else is to do the eBay stuff, I can quickly change the destination of that alias :-) So far, so good - and now to the problem: using Microsoft Outlook 2010 and an IMAP account on our mail server, I cannot figure out how to remove the "on behalf of" string visible in the From field when sending a message under that [email protected] address. That's quite a pity, because eBay in particular doesn't accept/forward mails not coming from the registered address. Using other mail clients (e.g. Mozilla Thunderbird), the problem does not occur, so I guess it's Outlook-specific. I cannot grant "send as" permission, because that address is not a mailbox, but only an alias. Furthermore, the mail accounts are not Exchange, but IMAP! Does anybody have any other ideas for removing that annoying string? Consideration: we have to use Microsoft Outlook, for some reason! :-)


  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here:

        linvx$ ( ulimit -u 123; /bin/echo nst )
        nst
        linvx$ ( ulimit -u 122; /bin/echo nst )
        -bash: fork: Resource temporarily unavailable
        Terminated
        linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
        one
        two
        three
        linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
        -bash: fork: Resource temporarily unavailable
        Terminated
        one

    I speculate that the first 122 processes are consumed by Bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something? More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how I can enforce a limit on the number of children (the program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)
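
    For what it's worth, the limit being tripped here is RLIMIT_NPROC, and it counts all processes owned by the user across the whole system, not just the children of this shell - which is why a value as high as 122 is already "used up" before the subshell forks anything. The same limit can be inspected and lowered programmatically; a minimal Python sketch (Linux only, and note the new soft value must not exceed the hard limit):

        import resource

        # ulimit -u corresponds to RLIMIT_NPROC.
        soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
        print("max user processes: soft=%s hard=%s" % (soft, hard))

        # A daemon could lower its own soft limit as a crude guard against
        # runaway forking; 200 is an arbitrary illustration, not advice.
        resource.setrlimit(resource.RLIMIT_NPROC, (200, hard))

    With the soft limit lowered this way, a bug that forks children without bound fails with EAGAIN at the limit instead of taking the whole server down.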


  • Proper approach to debug PC startup problems (POST)

    - by saurabhj
    My CPU was heating up to around 65 deg C, and the last time this happened (about a year ago), I had thermal paste put between the CPU and heat sink, which got it down to about 45-50 degrees. This time, I got some thermal paste and applied it myself. However, my PC is now not showing the POST display and not starting up. This is what happens:

    - LEDs light up
    - HDDs spin
    - Mouse is getting power
    - All fans, including the processor fan, start
    - No display on the monitor
    - No diagnostic beeps (no sounds at all)

    What I have tried: removing everything including RAM, HDD, PCI cards and the AGP card, then booting up the machine - no change from the first state. What steps can I take to figure out where the problem lies? Note (might be important): when I removed the heat sink, the processor came out stuck to it in spite of the processor latch being on, and I had to pry them apart with a screwdriver. Configuration: Pentium 4, 2.8 GHz with HT (very old, I know); original Intel motherboard (GB series) with onboard sound and graphics; 2x512 MB DDR RAM; 2 SATA disks (320 GB / 250 GB); DVD writer; Creative sound card; network card. Any help would be appreciated. Thanks!


  • File permissions on web server

    - by plua
    I have just read this useful article on file permissions, and I am about to implement an as-strict-as-possible file permissions policy on our webserver. Our situation: we have a web server accessed through SFTP by different users from within our company, and we have the general public accessing Apache - sometimes uploading files through PHP. I distinguish folders and files by their use. So based on this reading, here is my plan. All people who need to upload files will have separate users, but all of those users will belong to two groups: uploaders, and webserver. Apache will belong to the group webserver.

    Directories
    Permission: 771. Owner: user:uploaders.
    Explanation: to access files in the folder, everybody needs execute permission. Only uploaders will be adding/removing files, so they also get r+w permission.

    Files within the web root
    Permission: 664. Owner: user:uploaders.
    Explanation: they will be uploaded and changed by different users, so both owner and group need w+r permissions. The webserver only needs to read files, so r permission only.

    Upload directories
    Permission: 771. Owner: user:webserver.
    Explanation: when files need to be uploaded, Apache needs to be able to write to this directory. But I figure it is safer to change the group to webserver, thus giving Apache sufficient privileges (all uploaders also belong to this group and will have the same permissions), while safeguarding against "others" writing to this folder.

    Uploaded files
    Permission: 664. Owner: user:webserver.
    Explanation: after uploading, Apache might need to delete files, but this is no problem because it has w+r permission on the folder. So there's no need to make the file itself any more accessible than r for the group.

    Being no expert on file permissions, my question is whether or not this is the best possible policy for our situation? Any suggestions welcome.
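
    A minimal sketch of applying the proposed scheme to one upload directory with Python's standard library; the username "alice" and the path are hypothetical, the group name is the one from the plan, and it must run as root:

        import grp
        import os
        import pwd

        path = "/var/www/site/uploads"          # hypothetical upload directory
        uid = pwd.getpwnam("alice").pw_uid      # hypothetical uploader account
        gid = grp.getgrnam("webserver").gr_gid  # group shared with Apache

        os.chown(path, uid, gid)   # user:webserver, per the plan
        os.chmod(path, 0o771)      # rwxrwx--x

        # Apply the uploaded-files rule to everything already in the directory.
        for name in os.listdir(path):
            full = os.path.join(path, name)
            if os.path.isfile(full):
                os.chown(full, uid, gid)
                os.chmod(full, 0o664)  # rw-rw-r--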


  • Undo Google Sync in chrome

    - by iamcreasy
    I didn't know that my Google account hadn't been syncing with my Chrome for the last couple of months, and now that I have linked it again, the restored record is several months old. Now that I've lost all my recent bookmarks and other stuff... is there anything or any way I could revert the Google sync so I can get my bookmarks back?

    Update 1: I have found that under C:\Users\Profile_Name\AppData\Local\Google\Chrome\User Data\Default there is a file named Bookmarks.bak that holds the old state of my bookmarks before the sync.

    Update 2: Bookmarks is the file that holds the current (after-sync) bookmark list. I replaced Bookmarks with Bookmarks.bak and restarted Chrome, but Chrome still isn't fetching information from the updated file. So I have my old bookmark information, but how do I restore it in Chrome?

    Update 3 (solved): I still couldn't figure out why replacing the Bookmarks file didn't work, and apparently that's the only solution available on the web. I reinstalled everything and then copied the old bookmarks file over. Then I got my bookmarks back. Lesson learned: check regularly that Google sync is working.


  • How to fix high Load_Cycle_Count laptop drive (TOSHIBA MK6006GAH in Vaio TX1XP)?

    - by Sam Brightman
    Hoping someone knows exactly what's going on here. It seems this drive has some combination of aggressive power-saving settings and Ubuntu defaults that has massively increased the Load_Cycle_Count for the drive: https://wiki.ubuntu.com/DanielHahler/Bug59695. So the drive is now so slow that it cannot boot, because it takes long enough to access the data that the kernel will not recognise it properly. I'm not worried about the data on the drive, but would really like to keep the laptop functioning. There is some indication that this is possible, because the figure is still in the low 200,000s and most drives supposedly go to 600,000. Additionally, SMART tests pass and consider the drive healthy and without errors. But the really surprising thing was when I ran mhdd... every single read came up red (slow) until I pressed 'R' to reset the drive. I noticed the next read was normal speed, so I held down 'R'. Magically, the drive read perfectly for as long as I held the key BUT resumed slow (and noisy) seeking/reading after I released it. I don't think the source code to mhdd is available, so I'm not exactly sure what this means (besides, I don't know enough low-level HDD stuff either). It seems like the drive should be able to work, but is stuck trying to power-save or something. There are no BIOS options on the laptop. Does anyone know how I can stop the drive from doing extremely slow/noisy operations like this? Or is constantly resetting the drive also damaging, and it only works well by luck (i.e. this is not a sign that it's fixable)?


  • Why Are SPF Records Failing?

    - by robobobobo
    OK, I've been going through various different sites, resources and topics here trying to figure out what is wrong with my SPF records, but no matter what I do they don't seem to pass. Here's what I have:

        "v=spf1 +a +mx +ip4:217.78.0.92 +ip4:217.78.0.95 -all"

    I've tried multiple different tools to check my SPF records; some give me a pass, some don't. But I can't send mail to certain Google Apps accounts - it just bounces back all the time, which is very annoying. Anyone got any ideas? I have noticed that the source IP address is not one of the IPv4 addresses I've defined, but cPanel wouldn't let me add that address. And here's the result of the tests I'm getting back from port25.com. I'm running WHM, by the way, and have enabled SPF and DKIM.

        Summary of Results
        SPF check:          fail
        DomainKeys check:   neutral
        DKIM check:         pass
        Sender-ID check:    fail
        SpamAssassin check: ham

        Details:
        HELO hostname: server1.viralbamboo.com
        Source IP:     2a01:258:f000:6:216:3eff:fe87:9379
        mail-from:     ###@viralbamboo.com

        SPF check details:
        Result:         fail (not permitted)
        ID(s) verified: smtp.mailfrom=###@viralbamboo.com
        DNS record(s):
            viralbamboo.com. SPF (no records)
            viralbamboo.com. 13180 IN TXT "v=spf1 +a +mx +ip4:217.78.0.92 +ip4:217.78.0.95 -all"
            viralbamboo.com. AAAA (no records)
            viralbamboo.com. 13180 IN MX 0 viralbamboo.com.
            viralbamboo.com. AAAA (no records)

        DomainKeys check details:
        Result:         neutral (message not signed)
        ID(s) verified: header.From=###@viralbamboo.com

        DKIM check details:
        Result:         pass (matches From: ###@viralbamboo.com)
        ID(s) verified: header.d=viralbamboo.com

        Canonicalized Headers:
            content-type:multipart/alternative;'20'boundary="4783D1BE-5685-41CF-B91B-1F15E91DD1E3"'0D''0A'
            date:Mon,'20'1'20'Jul'20'2013'20'21:30:47'20'+0000'0D''0A'
            subject:=?utf-8?Q?test?='0D''0A'
            to:"[email protected]?="'20''0D''0A'
            from:=?utf-8?Q?Rob_Boland_-_Viralbamboo?='20'<###@viralbamboo.com'0D''0A'
            mime-version:1.0'0D''0A'
            dkim-signature:v=1;'20'a=rsa-sha256;'20'q=dns/txt;'20'c=relaxed/relaxed;'20'd=viralbamboo.com;'20's=default;'20'h=Content-Type:Date:Subject:To:From:MIME-Version;'20'bh=CJMO7HYeyNVGvxttf/JspIMoLUiWNE6nlQUg5WjTGZQ=;'20'b=;

