Search Results

Search found 21142 results on 846 pages for 'bit manipulation'.

Page 616/846

  • nginx with stub_status.. need help with nginx.conf

    - by Amar
    Hello, I am trying to set up nginx with stub_status so I can monitor nginx requests etc. with serverdensity.com. I needed to put something like this in nginx.conf:

        server {
            listen 82.113.147.xxx;
            location /nginx_status {
                stub_status on;
                access_log off;
                allow 82.113.147.xxx;
                deny all;
            }
        }

    With this, monitoring actually works. However, it seems I lost the "include" part of my nginx.conf, and now none of the vhosts in sites-enabled work. Here is a bit more of my nginx.conf:

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            server_tokens off;
            access_log /var/log/nginx/access.log;
            sendfile on;
            #tcp_nopush on;
            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
            server {
                listen 82.113.147.226;
                location /nginx_status {
                    stub_status on;
                    access_log off;
                    allow 82.113.147.226;
                    deny all;
                }
            }
        }

    I hope someone can help me with this, as I believe it's a minor issue; it's just that I don't see it. Thanks.
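
    A detail worth checking (an assumption read off the posted config, not confirmed): "listen 82.113.147.226;" carries no port, so that server block binds 82.113.147.226:80, and an IP-specific listen outranks the wildcard listens typical of vhosts in sites-enabled, which would shadow them. A quick verification sketch; the vhost name is hypothetical:

        # Syntax-check the config and confirm which files nginx actually includes
        sudo nginx -t

        # The status page should answer with "Active connections: N"
        curl -s http://82.113.147.226/nginx_status

        # A name-based vhost should answer with its own content,
        # not whatever the status server block serves by default
        curl -s -H 'Host: www.example-vhost.com' http://82.113.147.226/ | head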

    Read the article

  • Using the RST3 plugin in the Leo Outliner

    - by T-Boy
    I'm currently trying out the Leo Outliner, and I've heard quite a bit about the RST3 plugin it has. I'm not planning to use Leo to program just yet; at this point I'm wondering if it might be useful for generating HTML and PDF documents, as I'm currently quite enamored with RST and how it works. I'm using my Ubuntu Netbook Remix netbook (running 9.10, I believe). I think I've got it down pat, more or less: I've installed docutils using the Synaptic Package Manager, and I think I've gotten SilverCity installed as per the requirements; I've downloaded the archive and then run "sudo python setup.py install" with no errors. The thing is, I'm not exactly sure how to invoke the rst3 plugin itself. It doesn't appear in the Plugins menu for Leo right now, and the documentation I've managed to find doesn't clearly explain how to use the plugin. Has anyone had any experience with the rst3 plugin in Leo? It's a little confusing right now, and searches on Google don't seem to be helping much any more. I'm using the latest 4.7.1 final version of Leo from the Synaptic Package Manager (I was informed that this would offer the best integration with UNR, so I figured, what the heck). Have I missed any steps here, and are there any useful tutorials on how to use the rst3 plugin?

    Read the article

  • Adding tables to a herd in bucardo

    - by Joseph the Dreamer
    Forgive my ignorance; I am a JS programmer given the task of doing DB replication using Bucardo. I understand the concept of how Bucardo works, but setting it up is a bit confusing. The setup is: Lubuntu Linux; two PostgreSQL databases, test_master and test_slave; each DB has a table named test, containing 2 columns: id (PK) and test (int); I use pgAdmin3. I have already added them to Bucardo's list of databases and added all tables:

        Table: public.test   DB: test_slave    PK: id (int4)
        Table: public.test   DB: test_master   PK: id (int4)

    As you can see, because the DBs are identical, even the schema names are identical. So when I do:

        bucardo_ctl add herd sample_herd public.test

    it gets added to the herd, but this command has no way to tell which database public.test comes from. So when I add a sync:

        $ bucardo_ctl add sync sample_sync source=sample_herd targetdb=test_slave type=fullcopy
        Failed to add sync: DBD::Pg::st execute failed: ERROR: Source and target databases cannot be the same: test_slave at line 118. at line 30.
        CONTEXT: PL/Perl function "validate_sync" at /usr/bin/bucardo_ctl line 3362.

    What does it mean that source and target cannot be the same? If it got confused as to which public.test to use as the source, how do I differentiate?
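
    One way to disambiguate is to bind the table to its source database before herding it. A sketch, assuming Bucardo 4's "db=" parameter; flag spellings vary between releases, so check "bucardo_ctl help add":

        # Tie public.test to the source database explicitly while adding it to the herd
        bucardo_ctl add table public.test db=test_master herd=sample_herd

        # The sync then copies from the herd (rooted in test_master) to test_slave
        bucardo_ctl add sync sample_sync source=sample_herd targetdb=test_slave type=fullcopy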

    Read the article

  • Crash dump analysis

    - by Ryan Ries
    I hope this isn't a stupid question, and if it is, then I want to at least get it over with so I don't feel so dumb in the future. Here we are, loading up a Windows crash dump with WinDbg. Here are the first few lines of the debugger output:

        0: kd> .dumpdebug
        ----- 64 bit Kernel Summary Dump Analysis
        DUMP_HEADER64:
        MajorVersion 0000000f
        MinorVersion 00001db1
        ...

    The MinorVersion I mostly understand. It's hexadecimal, and it translates to 7601 in decimal. Windows admins can already tell from that alone that this must be either a Win7 x64 machine or a 2k8 R2 machine with SP1. But isn't 7601 the build number? It's supposed to be Major.Minor.Build/Revision... right? Also, I don't understand the MajorVersion. It should be 6; this version of Windows is 6. But isn't 0000000f in hexadecimal 15 in decimal? The full version string of this version of Windows, as shown when you launch the Command Prompt for instance, is 6.1.7601. If 7601 is the MinorVersion, then what are the 1 and the 6? And why does the crash dump say 0f?
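
    For what it's worth, the dump header fields don't map onto the marketing version string: in the documented DUMP_HEADER layout, MinorVersion carries the build number, and MajorVersion conventionally encodes the build type (0xF for a free/retail build, 0xC for a checked build) rather than the "6" of 6.1.7601; the 6.1 pair lives in other structures in the dump. The hex arithmetic itself checks out:

        # Hex-to-decimal check of the two header fields
        printf 'MinorVersion: %d\n' 0x1db1   # 7601, the build number
        printf 'MajorVersion: %d\n' 0xf      # 15, i.e. 0xF, the free-build marker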

    Read the article

  • Performance of file operations on thousands of files on NTFS vs HFS, ext3, others

    - by peterjmag
    [Crossposted from my Ask HN post. Feel free to close it if the question's too broad for superuser.] This is something I've been curious about for years, but I've never found any good discussions on the topic. Of course, my Google-fu might just be failing me... I often deal with projects involving thousands of relatively small files. This means that I'm frequently performing operations on all of those files or a large subset of them—copying the project folder elsewhere, deleting a bunch of temporary files, etc. Of all the machines I've worked on over the years, I've noticed that NTFS handles these tasks consistently slower than HFS on a Mac or ext3/ext4 on a Linux box. However, as far as I can tell, the raw throughput isn't actually slower on NTFS (at least not significantly), but the delay between each individual file is just a tiny bit longer. That little delay really adds up for thousands of files. (Side note: From what I've read, this is one of the reasons git is such a pain on Windows, since it relies so heavily on the file system for its object database.) Granted, my evidence is merely anecdotal—I don't currently have any real performance numbers, but it's something that I'd love to test further (perhaps with a Mac dual-booting into Windows). Still, my geekiness insists that someone out there already has. Can anyone explain this, or perhaps point me in the right direction to research it further myself?
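
    A rough way to turn the anecdote into numbers, as a minimal sketch (file count and contents are arbitrary; on the NTFS side the same script would need Cygwin or a PowerShell translation):

        # Create 10,000 small files, then time the metadata-heavy operations
        mkdir -p many_files && cd many_files
        for i in $(seq 1 10000); do echo "data $i" > "file_$i.txt"; done
        cd ..

        time cp -r many_files many_files_copy   # one create/open/close per file
        time rm -rf many_files_copy             # one unlink per file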

    Read the article

  • IPMI not functioning with Network Bonding

    - by muhammed sameer
    Hey, I am having problems with running IPMI on my servers that have network bonding enabled.

        Platform: CentOS release 5.3 (Final)
        Kernel: 2.6.18-92.el5 64bit
        Dell PowerEdge 1950
        Ethernet controller: Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet

    I have bonded the interfaces eth0 and eth1 as active-passive, with eth0 as the active interface. Below is the conf description from /proc:

        Bonding Mode: fault-tolerance (active-backup)
        Primary Slave: eth0
        Currently Active Slave: eth0
        MII Status: up
        MII Polling Interval (ms): 30
        Up Delay (ms): 0
        Down Delay (ms): 0

        Slave Interface: eth0
        MII Status: up
        Link Failure Count: 0
        Permanent HW addr: 00:22:19:56:b9:cd

        Slave Interface: eth1
        MII Status: up
        Link Failure Count: 0
        Permanent HW addr: 00:22:19:56:b9:cf

    My IPMI device is as follows:

        IPMI Device Information
        Interface Type: KCS (Keyboard Control Style)
        Specification Version: 2.0
        I2C Slave Address: 0x10
        NV Storage Device: Not Present
        Base Address: 0x0000000000000CA8 (I/O)
        Register Spacing: 32-bit Boundaries

    I have used OpenIPMI as well as FreeIPMI to control the chassis via the IPMI card, but on servers which have bonding enabled the command times out. Below is the full run of the command with debug info:

        ipmi_lan_send_cmd:opened=[0], open=[4482848]
        IPMI LAN host 70.87.28.115 port 623
        Sending IPMI/RMCP presence ping packet
        ipmi_lan_send_cmd:opened=[1], open=[4482848]
        No response from remote controller
        Get Auth Capabilities command failed
        ipmi_lan_send_cmd:opened=[1], open=[4482848]
        No response from remote controller
        Get Auth Capabilities command failed
        Error: Unable to establish LAN session
        Failed to open LAN interface
        Unable to get Chassis Power Status

    On the other hand, I configured IPMI on a box with the same specs as above but without bonding, and IPMI works perfectly. Has anyone faced this problem with IPMI + bonding? I would be thankful if someone helps circumvent this issue. Muhammed Sameer
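
    On PowerEdge hardware the BMC usually piggybacks on the first onboard NIC, and active-backup bonding by default rewrites both slaves to a single MAC, which can confuse the shared-port filter. A sketch of two things to try; the credentials are placeholders, and fail_over_mac needs a newer bonding driver than el5's stock 2.6.18, which is an assumption to verify:

        # Reproduce the failure with ipmitool from a remote host (RMCP is UDP port 623)
        ipmitool -I lan -H 70.87.28.115 -U root -P changeme chassis power status

        # As root: keep the bond from rewriting the slaves' MACs, so the
        # active slave keeps its own permanent address
        echo 1 > /sys/class/net/bond0/bonding/fail_over_mac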

    Read the article

  • How to replicate Lynda.com transitions in my own presentations (Keynote preferred, PowerPoint possible)

    - by jdmuys
    I recently noticed that lynda.com uses a very interesting transition in their training videos that I would like to replicate in my own presentations. It is somewhat similar to the computer screen animations in the movie "Minority Report": they zoom out to a huge sheet, pan, and zoom back in to a new location. It's very good for navigating deep hierarchies. For example, when you are at level 1 you can see level 3, but it's tiny and unreadable; you then transition by zooming in to a deeper level that suddenly becomes legible. I can also imagine using it with a large mind map as a common Ariadne's thread, diving into details (and back out) while still taking advantage of the audience's visual memory. It's a bit difficult to describe; I hope I managed to convey the idea. You can see the transitions in action here (iPhone programming training): http://www.lynda.com/home/DisplayCourse.aspx?lpk2=48369 - click on "What you should know" in the introduction chapter, and watch between 22s and 24s, then between 27s and 28s. How do they do that? How could I replicate that effect? Thanks in advance.

    Read the article

  • chmod -R 777 / on ubuntu - numerous problems

    - by ncatnow
    A client has accidentally given the entire filesystem full permissions on their Ubuntu 10.04 box:

        chmod -R 777 httpdocs/cd /

    As you can see, they attempted to cd to the root and instead gave chmod a fun parameter to play with. The first sign of the problem was the inability to use su, which gave an authentication error. sudo also complained of a missing setuid bit. This was fixed by logging in as root from the machine itself and running chmod +s /usr/bin/sudo. I can now sudo su and do what I need to as root. su still gives an authentication failure. I followed the advice here: http://swiss.ubuntuforums.org/showthread.php?t=1180661&page=2

        chmod 0755 /
        chmod 0755 /*
        chmod 1777 /tmp
        chmod 0750 /root
        chmod 0700 /lost+found

    I then tried to reset the root password. I still cannot su to become root, or su root. The system seems to be running fine. Are there any suggestions for getting su to work once again? Where should I look for more problems?
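
    su failing with an authentication error after a permissions sweep is commonly one of two things: su itself lost its setuid bit, or /etc/shadow ended up with permissions PAM refuses to accept. A sketch of the checks, assuming stock Ubuntu 10.04 paths:

        # su must be setuid root to authenticate anyone
        ls -l /bin/su                  # expect -rwsr-xr-x root root
        sudo chmod 4755 /bin/su

        # shadow must not be world-readable or world-writable
        ls -l /etc/shadow              # expect -rw-r----- root shadow
        sudo chown root:shadow /etc/shadow
        sudo chmod 640 /etc/shadow

        # other setuid helpers the sweep may have flattened
        sudo chmod 4755 /usr/bin/passwd /usr/bin/sudo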

    Read the article

  • USB Hardware vs. Software Write Lock

    - by TreyK
    I'm in the market for a USB flash drive, and remember this cool feature a tiny 32MB flash drive of mine had: a write lock switch. This seemed like it would be an amazing feature to have as a shield against any nastiness happening to the drive on an unfamiliar computer. However, very few drives on the market offer this feature. Instead, it seems that forms of software protection are the more prominent method. This software protection causes me a bit of uneasiness, as it seems like this software wouldn't be nearly as bulletproof as a physical switch. Also, levels of protection seem to vary from product to product. Being able to protect certain folders from reading and/or writing would be nice, but is the security trade-off worth it? Just how effective can this software protection be? Wouldn't a simple format be able to clean any drive with software protection? My drive must also be compatible with Windows XP, Vista, and 7, as well as Linux and Mac. What would be the best way forward for getting a well-sized (~8GB) flash drive with a strong write protection implementation, for little or no more than a regular drive? Thanks.

    Read the article

  • SQL Server 2008: Can't connect to remote server via management studio but can telnet in fine

    - by WarpKid
    Hi, I am in the process of trying to configure SQL Server 2008 to accept remote connections. I have been through all the documentation I can find, and yet when I attempt to connect through Management Studio I get an error stating that the server could not be found. Interestingly, I can connect through telnet to the remote server via the port that SQL Server is listening on, and in the SQL Server logs I can see the connection attempt. So SQL Server is up and running and listening on the correct port, with no firewall blocking it. It would appear that SQL Server is listening on port 50314 by default, but Management Studio attempts to connect on port 1433. Weird. Management Studio = no dice. Anyone got any ideas? The server is set to allow remote connections, TCP/IP is enabled, and the firewall is off. Thanks.

    UPDATE TO CLEAR THINGS UP A BIT: We see the connection attempt in the SQL Server logs when we telnet in on port 50314. When we log in through Management Studio, we see it attempting to connect on port 1433, and there is no sign of that connection attempt in the logs.
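
    When an instance listens on a fixed non-default port, the client has to be told the port explicitly; SQL Server's client syntax is "host,port" with a comma, which is easy to miss. A sketch with a hypothetical hostname and credentials:

        # In the SSMS "Server name" box:  dbserver.example.com,50314
        # The same connection from the command line:
        sqlcmd -S tcp:dbserver.example.com,50314 -U sa -P 'secret'

    Alternatively, with the SQL Server Browser service running, clients can resolve a named instance to its port automatically, but the comma form works regardless.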

    Read the article

  • ZFS + FreeBSD + virtualbox

    - by John
    Hi, I'm configuring a FreeBSD server hosting VirtualBox, serving half a dozen mission-critical, busy mail servers. I just learned ZFS and I'm quite attracted to it, but I have a few questions:

    What is the CPU overhead of ZFS? I googled and found little (or no) benchmarking on that.

    From what I learned, when ZFS updates files it keeps the old file as a snapshot and writes the updated part for the new version. However, that would mean each snapshot it keeps requires significant storage overhead. How much is this storage overhead? For example, suppose I have 2TB of usable space; how much space can actually be used for the latest version of files one year later?

    Is FreeBSD with ZFS hosting VirtualBox, serving half a dozen busy, mission-critical guest mail servers, a reasonable combination? Anything in particular to be careful with? And can I still choose ZFS for the guest OSes? I ask because I may build another identical box for redundancy and will need to do some mirroring between each pair of the guest systems across the boxes.

    I'm trying to configure a Dell R710 for this. From what I learned, I shouldn't choose any RAID at all; is that true? In that case, do the drives still arrive hot-swappable?

    This may sound a bit pathetic, but since I have no experience with ZFS at all, and this is a mission-critical server, I'll ask just in case: I'm choosing twin Intel L5630 processors and 6 x 600GB 15K RPM Serial-Attached SCSI drives. If I need more space in the future, I would just hot-swap some drives with larger capacity to expand the storage. There is no problem with these, right?
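
    On the "no RAID controller" point: ZFS generally wants the raw disks so it can detect and repair errors itself, and the usual layout for this disk count is a stripe of mirrors. A sketch with assumed FreeBSD device names:

        # Three 2-way mirrors striped together; one disk per mirror may fail
        zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

        # Growing later: replace each disk of one mirror with a larger one,
        # letting each resilver complete in between
        zpool set autoexpand=on tank
        zpool replace tank da0 da6    # da6 being a hypothetical larger disk
        zpool status tank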

    Read the article

  • Wildcard DNS, VirtualHosts on apache2, 404 for unused subdomains

    - by niel
    On an Apache2 server linked to by a DNS zone that includes a wildcard entry, e.g. *.example.com, subdomains that are not defined as ServerNames in any VirtualHost point to the first defined VirtualHost; in my example this is 000-default.

    My question: how would one get unused subdomains (subdomains not used in any VirtualHost) to return a 404 error to the requesting client? This must preferably show in the server logs as a 404 as well.

    I have looked into the following possibilities:

    Redirecting any invalid subdomain to the home page or some other page. The problem with this method is that when someone links to your site as this.company.sucks.example.com, the client will see your home page, or in my case 000-default if I do not redirect. Thanks to Mike for pointing this out. (A regex for "sucks" etc. is definitely not an option.)

    Letting the default VirtualHost point to a non-existent directory. Apache does not like this one bit, warning with every reload. Beyond the warning, everything seems fine. This seems like a hack. Does this seem like a problem (however small) to anyone?

    Pointing the default VirtualHost to a folder where the index.php is forbidden, thus creating a 403 status code. This is confusing and makes things overly complicated. Say, for example, you use a subdomain per user (a big reason to use wildcard DNS, apparently), and users have the ability to view each other's profiles at username.example.com; this solution is confusing to the user and completely not what I want to do.

    My ideal solution will let the user know there is nothing to view at the URL they entered, preferably with a 404 and an error log entry for the address entered (not some other address). Any help would be greatly appreciated!
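
    A fourth option worth sketching: a catch-all default vhost that answers every unmatched Host with a plain 404 via mod_alias's Redirect-with-status, which also logs the request as a 404 under the exact URL the client asked for. Numeric statuses in Redirect need Apache 2.4+ (on 2.2, "Redirect gone /" giving a 410 is the closest), and the file paths and log names here are assumptions:

        # A default vhost, sorted first, that 404s everything
        cat > /etc/apache2/sites-available/000-catchall <<'EOF'
        <VirtualHost *:80>
            ServerName catchall.invalid
            Redirect 404 /
            ErrorLog /var/log/apache2/catchall-error.log
            CustomLog /var/log/apache2/catchall-access.log combined
        </VirtualHost>
        EOF
        a2ensite 000-catchall && apache2ctl graceful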

    Read the article

  • Properly force SSL with .htaccess, no double authentication

    - by cwd
    I'm trying to force SSL with .htaccess on a shared host. This means I only have access to .htaccess and not the vhost config. I know you can put a rule in the VirtualHost config file to force SSL, which will be picked up there (and acted upon first), preventing double authentication, but I can't get to that. Here's the progress I've made:

    Config 1. This works pretty well, but it does force double authentication if you visit http://site.com: once for http and then once for https. Once you are logged in, it automatically redirects http://site.com/page1.html to the https counterpart just fine:

        RewriteEngine On
        RewriteCond %{HTTPS} !=on
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        RewriteEngine on
        RewriteCond %{HTTP_HOST} !(^www\.site\.com*)$
        RewriteRule (.*) https://www.site.com$1 [R=301,L]

        AuthName "Locked"
        AuthUserFile "/home/.htpasswd"
        AuthType Basic
        require valid-user

    Config 2. If I add this to the top of the file, it works a lot better in that it will switch to SSL before prompting for the password:

        SSLOptions +StrictRequire
        SSLRequireSSL
        SSLRequire %{HTTP_HOST} eq "site.com"
        ErrorDocument 403 https://site.com

    It's clever how it uses the SSLRequireSSL option and the ErrorDocument 403 to redirect to the secure version of the site. My only complaint is that if you try to access http://site.com/page1.html it will redirect to https://site.com/. So it is forcing SSL without a double login, but it is not properly forwarding non-SSL resources to their SSL counterparts.

    Regarding the first config, Insyte mentioned: "Using mod_rewrite to perform a simple redirect is a bit of overkill. Use the Redirect directive instead. It's possible this may even fix your problem, as I believe mod_rewrite rules are some of the last directives to be processed, just before the file is actually grabbed from the filesystem." I have had no luck finding a force-SSL config option with the Redirect directive, and so have been unable to test this theory.
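
    For the record, the Redirect-directive variant Insyte described would look something like the sketch below. mod_alias cannot test the scheme by itself, so this leans on Apache 2.4's <If> expression; whether the shared host runs 2.4 and permits these directives in .htaccess is an open assumption, and the snippet is untested:

        <If "%{HTTPS} != 'on'">
            # Redirect preserves the rest of the path, so /page1.html maps
            # to https://site.com/page1.html rather than the bare root
            Redirect permanent / https://site.com/
        </If>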

    Read the article

  • RAID 5 - DELL 2850 and others

    - by Kiara
    I have installed Ubuntu on a Dell 2850 and I have configured an array of 5 disks (SCSI 73GB 10K) in RAID 5. I wanted to simulate a drive error, so in the middle of something I just took one of the drives out and put it back again after a bit. The drive then shows an orange light and seems to be rebuilding, but it has actually been going for hours and hours with no results. So I went to the PERC utility (Ctrl+M) and the disk shows "REBLD", but it never gets to an online state. So I went to Objects - Physical drives - Rebuilding - View rebuild process. Here I can see a bar moving from 0%... but if I reboot before it finishes and get into the PERC utility again, it starts rebuilding from 0% again - so it is not rebuilding automatically.

    My concern is: what would happen in a real situation? Do I have to switch the server off and go to the PERC utility to start the rebuild manually? I thought the whole point was to have this done automatically, without the need to stop the server. Or does it perhaps rebuild automatically indeed, but needs enough time without rebooting, because otherwise the rebuild process starts from scratch? It seems to take more than 3h for a 73GB disk!

    My second question is: can I mix hard drives then? If I have a RAID of 5x73GB 10K, can I use a different size (146GB) or speed (15K)? Apparently someone said it is OK here: Poweredge 2850: replace disk with larger in RAID?

    Read the article

  • Excel VBA: select every other cell in a row range to be copied and pasted vertically

    - by terry alexander
    I have a 2200+ page text file. It is delivered from a customer through a data exchange to us with asterisks to separate values and tildes (~) to denote the end of a row. The file is sent to me as a text file in Word. Most rows are split in two (one row covers a full line and part of a second line). I transfer segments (10-page chunks) of it at a time into Excel where, unfortunately, any zeroes that occur at the end of a row get discarded in the "text to columns" procedure. So I eyeball every "long" row to ensure that zeroes were not lost, and manually re-enter any that were. Here is a small bit of sample data:

        SDQ EA 92 1551 378 1601 151 1603 157 1604 83

    The "SDQ, EA, and 92" are irrelevant (artifacts of data transmission). I want to use Excel VBA to select 1551, 1601, 1603, and 1604 (these are store numbers) so that I can copy those values and transpose-paste them vertically. I will then go back and copy 378, 151, 157, and 83 (sales values) so that I can transpose-paste them next to the store numbers. The next two rows of data contain the same store numbers but give the corresponding dollar values; I will only need to copy the dollar values so they can be transpose-pasted vertically next to the unit values (e.g. 378, 151, 157, and 83). Just being able to put my cursor on the first cell of interest in the row and run a macro to copy every other cell would speed up my work tremendously. I have tried using ActiveCell and Offset references to select a range to copy, but have not been successful. Does anyone have any suggestions for me? Thanks in advance for the help.
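
    A minimal VBA sketch of the every-other-cell walk: it assumes the cursor sits on the first store number, that a blank cell ends the data in that row, and it pastes down a hypothetical destination column on Sheet2:

        Sub CopyEveryOtherCellVertically()
            Dim src As Range
            Dim dest As Range
            Dim i As Long
            Set src = ActiveCell                          ' first store number
            Set dest = Worksheets("Sheet2").Range("A1")   ' hypothetical destination
            i = 0
            Do While src.Offset(0, i * 2).Value <> ""
                ' step two columns right per store, one row down per paste
                dest.Offset(i, 0).Value = src.Offset(0, i * 2).Value
                i = i + 1
            Loop
        End Sub

    Run it once from the first store number, then again (pointed at a second destination column) from the first sales value.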

    Read the article

  • 3GB RAM Installed and Detected by BIOS, Windows Vista 32bit Only Sees 2GB

    - by Nathan Taylor
    I am attempting to install more RAM on a Windows Vista 32-bit machine which uses an X6DAL-XG motherboard. The RAM amount reported by the BIOS is 3GB+, but Windows is only reporting 2GB installed. The motherboard has 6 RAM bays, which I have populated with various combinations of four 1GB sticks and two 512MB sticks, but no matter how I configure them Windows doesn't see more than 2GB. I realize of course that 32-bit Windows has a 3GB cap on memory, but that doesn't explain why it will only report 2GB when there are in fact (currently) 5GB installed. I should think I would be able to see at least 3GB. According to the spec list for the motherboard, the minimum RAM requirement is DDR333/266 installed in pairs. I have done this exactly, and the BIOS isn't reporting any problems at POST. RAM configuration (according to CPU-Z):

        Slot #1: Kingston 128mx72D266C25 - 1024MB PC2100 (133MHz)
        Slot #2: Kingston KVR266X72RC25/1024 - 1024MB PC2100 (133MHz)
        Slot #3: PQI - 512MB PC2700 (166MHz)
        Slot #4: Kingston 128mx72D266C25 - 1024MB PC2100 (133MHz)
        Slot #5: Kingston KVR266X72RC25/1024 - 1024MB PC2100 (133MHz)
        Slot #6: PQI - 512MB PC2700 (166MHz)

    I'm not sure whether the memory specs above conflict with this statement in the motherboard manual:

        Memory Support
        The X6DAL-XG supports up to 12GB/24GB of registered ECC DDR333/266 (PC2700/PC2100) memory. The motherboard was designed to support 4GB (PC2100) modules in each slot, but only the 2GB modules have been tested. When using registered ECC DDR333 (PC2700) memory, installing four pieces of double-banked memory or six pieces of single-banked memory is supported.

    So, am I doing something wrong with the RAM I have now, or is there some sort of compatibility problem which I am missing? Thanks!

    Read the article

  • can't find port 22 traffic under VirtualBox

    - by telliott99
    I'm trying to learn to use tcpdump. I thought I'd eavesdrop on my ssh login. The setup is a bit unusual: I have OS X Lion running VirtualBox, with Ubuntu running in the VM. I have ssh enabled and can log in from OS X normally:

        > ssh -p 22 10.0.1.2 -l telliott
        Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-17-generic i686)
        * Documentation: https://help.ubuntu.com/
        0 packages can be updated.
        0 updates are security updates.
        Last login: Sat Mar 31 19:54:36 2012 from toms-mac-mini.local
        telliott@U32:~$ logout
        Connection to 10.0.1.2 closed.

    I have not obfuscated the ssh port on Ubuntu. From OS X, stroke gives what I expect:

        > ./stroke 10.0.1.2 22 22
        Port Scanning host: 10.0.1.2
        Open TCP Port: 22 ssh

    So from OS X I do:

        > sudo tcpdump -i en1 -v port 22
        Password:
        tcpdump: listening on en1, link-type EN10MB (Ethernet), capture size 65535 bytes

    Then I log in from OS X to Ubuntu using ssh, but I see nothing with tcpdump. Here is ifconfig from Ubuntu:

        telliott@U32:~$ ifconfig eth1
        eth1  Link encap:Ethernet HWaddr 08:00:27:d7:ba:0e
              inet addr:10.0.1.2 Bcast:10.0.1.255 Mask:255.255.255.0
              inet6 addr: fe80::a00:27ff:fed7:ba0e/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
              RX packets:799 errors:0 dropped:0 overruns:0 frame:0
              TX packets:465 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:96863 (96.8 KB) TX bytes:68638 (68.6 KB)

    Where are the packets I was hoping to see? Thanks for any help.
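
    One likely explanation (an assumption about the VM's network mode, which the post doesn't state): with a bridged adapter, host-to-guest packets are switched inside VirtualBox and may never appear on en1, and with a host-only network they travel on vboxnet0 instead. Two quick checks:

        # List the capture-capable interfaces on the Mac; VirtualBox
        # host-only networks show up as vboxnet0
        sudo tcpdump -D

        # Or capture inside the guest, where the packets must appear
        sudo tcpdump -i eth1 -nn -v port 22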

    Read the article

  • Triple boot Windows 7, Windows 8, and Mountain Lion on MacBook Pro

    - by Nathan
    OK, so I have a bit of a unique situation here I could use some help on. I've modded my summer 2011 MBPro to have 2 hard drives by replacing the optical drive. OS X Mountain Lion is installed on a single partition of a 120GB SSD. The second drive is 750GB, partitioned as 550GB, 150GB, and ~50GB. I've set the 550GB partition to act as my OS X home folder, but I'd like to install Windows 7 and Windows 8 on the remaining partitions. It took a while, but I eventually found a way to install Windows without a CD/DVD drive by following this guide: http://huguesval.com/blog/2012/02/installing-windows-7-on-a-mac-without-superdrive-with-virtualbox/

    It worked flawlessly for creating both Windows 7 and Windows 8 images that I could clone onto FAT32 partitions. However, I have encountered a problem when trying to triple boot. After I put Windows 8 onto the ~50GB partition and tried to boot into Windows 7, I get an error that says something like:

        error: 0x0000000e
        The boot selection failed because the required device is inaccessible.

    If I re-clone the Windows 7 image onto the drive and select the option to "replace BCD" for the drive, Windows 7 will boot but Windows 8 now gives me the same exact error. I realize this is a pretty extensive setup, but if anyone has some insight I'd love to hear it.
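
    One repair path, as a sketch: instead of re-cloning and replacing the BCD, add an entry for the second install to the BCD store of whichever Windows currently boots. The drive letter is an assumption (the other install mounted as E: here):

        :: From an elevated Command Prompt inside the booting Windows,
        :: add boot files and a BCD entry for the other install
        bcdboot E:\Windows

        :: Check the resulting menu entries
        bcdedit /enum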

    Read the article

  • Can I change the "From" name and address without going into Mail.app's preferences?

    - by Arjan van Bentem
    Can I somehow add an editable "From" field to Mail.app, a bit like Virtual Identity for Thunderbird, just as I can change the "Reply To" address on the fly? In Mail.app, one can set up multiple email addresses for a single account by entering a comma-separated list like [email protected], [email protected]. Then, when composing a message, Mail offers a dropdown to select a "From" address, and when replying† it automatically selects the right address if it can find a match. Nice, but I'd like to be able to change the "From" on the fly, without going into the Account Information. Also, in previous versions of Mail one could even specify multiple full names for a single account:

        Email Address: Arjan <[email protected]>, Arjan on SU <[email protected]>

    But nowadays Mail only uses the Full Name setting, and even ignores names in the Email Address field when no value for Full Name is set at all. Hence, it would be great if I could change the Full Name on the fly as well. I've had no luck finding a plugin for Mail yet.

    † When using sub-addressing, anything that is sent to [email protected] is simply delivered to [email protected]. I'd then like to reply with the same full address, rather than just [email protected] if Mail cannot find a match, without going into the Account Information first. I sometimes also want to compose a new message with a new sub-addressing address.

    Read the article

  • Is there a tool that can test what SSL/TLS cipher suites a particular website offers?

    - by Jeremy Powell
    Is there a tool that can test which SSL/TLS cipher suites a particular website offers? I've tried openssl, but if you examine the output:

        $ echo -n | openssl s_client -connect www.google.com:443
        CONNECTED(00000003)
        depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
        verify error:num=20:unable to get local issuer certificate
        verify return:0
        ---
        Certificate chain
        0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
          i:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
        1 s:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
          i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM
        MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg
        THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x
        MTEyMTgyMzU5NTlaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh
        MRYwFAYDVQQHFA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKFApHb29nbGUgSW5jMRcw
        FQYDVQQDFA53d3cuZ29vZ2xlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC
        gYEA6PmGD5D6htffvXImttdEAoN4c9kCKO+IRTn7EOh8rqk41XXGOOsKFQebg+jN
        gtXj9xVoRaELGYW84u+E593y17iYwqG7tcFR39SDAqc9BkJb4SLD3muFXxzW2k6L
        05vuuWciKh0R73mkszeK9P4Y/bz5RiNQl/Os/CRGK1w7t0UCAwEAAaOB5zCB5DAM
        BgNVHRMBAf8EAjAAMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwudGhhd3Rl
        LmNvbS9UaGF3dGVTR0NDQS5jcmwwKAYDVR0lBCEwHwYIKwYBBQUHAwEGCCsGAQUF
        BwMCBglghkgBhvhCBAEwcgYIKwYBBQUHAQEEZjBkMCIGCCsGAQUFBzABhhZodHRw
        Oi8vb2NzcC50aGF3dGUuY29tMD4GCCsGAQUFBzAChjJodHRwOi8vd3d3LnRoYXd0
        ZS5jb20vcmVwb3NpdG9yeS9UaGF3dGVfU0dDX0NBLmNydDANBgkqhkiG9w0BAQUF
        AAOBgQCfQ89bxFApsb/isJr/aiEdLRLDLE5a+RLizrmCUi3nHX4adpaQedEkUjh5
        u2ONgJd8IyAPkU0Wueru9G2Jysa9zCRo1kNbzipYvzwY4OA8Ys+WAi0oR1A04Se6
        z5nRUP8pJcA2NhUzUnC+MY+f6H/nEQyNv4SgQhqAibAxWEEHXw==
        -----END CERTIFICATE-----
        subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
        issuer=/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 1777 bytes and written 316 bytes
        ---
        New, TLSv1/SSLv3, Cipher is AES256-SHA
        Server public key is 1024 bit
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : AES256-SHA
            Session-ID: 748E2B5FEFF9EA065DA2F04A06FBF456502F3E64DF1B4FF054F54817C473270C
            Session-ID-ctx:
            Master-Key: C4284AE7D76421F782A822B3780FA9677A726A25E1258160CA30D346D65C5F4049DA3D10A41F3FA4816DD9606197FAE5
            Key-Arg   : None
            Start Time: 1266259321
            Timeout   : 300 (sec)
            Verify return code: 20 (unable to get local issuer certificate)
        ---

    it just shows that the negotiated cipher suite is AES256-SHA. I know I could grep through a hex dump of the conversation, but I was hoping for something a little more elegant. I would prefer Linux tools, but Windows (or others) would be fine. This question is motivated by the security testing I do for PCI compliance and general penetration testing.

    Update: GregS points out below that the SSL server picks from the cipher suites offered by the client. So it seems I would need to test all cipher suites one at a time. I think I can hack something together, but is there a tool that does precisely this?
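
    Absent a dedicated tool, a shell loop can brute-force the list: offer each suite the local OpenSSL build knows, one at a time, and record which handshakes succeed. A rough sketch:

        # Suites the server refuses fail the handshake and are skipped
        for cipher in $(openssl ciphers 'ALL:eNULL' | tr ':' ' '); do
            if echo -n | openssl s_client -connect www.google.com:443 \
                    -cipher "$cipher" >/dev/null 2>&1; then
                echo "accepted: $cipher"
            fi
        done

    Note the loop only covers suites the local OpenSSL supports; newer nmap builds also ship an ssl-enum-ciphers script that automates the same idea.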

    Read the article

  • Snort/Barnyard2 Logging

    - by Eric
    I need some help with my Snort/Barnyard2 setup. My goal is to have Snort send unified2 logs to Barnyard2 and then have Barnyard2 send the data to other locations. Here is my current setup:

        OS: Scientific Linux 6
        Snort version: 2.9.2.3
        Barnyard2 version: 2.1.9

        Snort command:
        snort -c /etc/snort/snort.conf -i eth2 &

        Barnyard2 command:
        /usr/local/bin/barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.log -w /var/log/snort/barnyard.waldo &

        snort.conf:
        output unified2: filename snort.log, limit 128

        barnyard2.conf:
        output alert_syslog: host=127.0.0.1
        output database: log, mysql, user=snort dbname=snort password=password host=localhost

    With this setup, Barnyard2 is showing all of the correct information in the database, and I'm using BASE to view it on the web GUI. I was hoping to be able to send the full packet data to syslog with Barnyard2, but after reading around, it seems that is impossible. So I then started modifying the snort.conf file, adding lines like "output alert_full: alert.full". This definitely gave me a lot more information, but still not the full packet data like I want. So my question is: is there any way I can use Barnyard2 to send the full packet data of alerts to a human-readable file? Since I can't send it directly to syslog, I can create another process to take the data from that file and ship it off to another server. If not, what flags and/or snort.conf configuration would you recommend to get the most data possible while still handling quite a bit of traffic? In the end, these alerts will be shipped to a central server via an SSH tunnel. I'm trying to stay away from databases.
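
    One possibility that sidesteps Barnyard2 for the packet bodies: Snort 2.x ships a log_tcpdump output plugin that writes the full packets for logged events to a pcap, which can then be rendered human-readable out-of-band. A sketch; the paths are assumptions:

        # in snort.conf:
        #   output log_tcpdump: /var/log/snort/alerts.pcap

        # render the captured packets as text for shipping over the SSH tunnel
        tcpdump -nn -tttt -r /var/log/snort/alerts.pcap > /var/log/snort/alerts.txt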

    Read the article

  • Advice on migrating from a Samba PDC

    - by pgb
    When we started our software development company, we decided to use Samba as a PDC for the few Windows workstations we had. We use Samba with OpenLDAP, and it has been a good replacement for AD for almost 6 years now (using Windows XP workstations). Now I'm facing a few problems with our setup: the Linux server where the PDC runs is very outdated (and is a Gentoo install, don't ask why!); we started using Windows 7 on some of the workstations, and these can't join the Samba domain (there's a workaround, I know); and our company has grown a bit, so we now have about 20 workstations (and plan to have more in the near future).

    I have to reinstall our PDC and was thinking of updating to another Linux distro and the latest Samba 3.4. However, I started having second thoughts, and now I think moving to a Windows Server for the PDC is the way to go. The main drivers for opting for a Windows Server would be its easy administration and the ability to use Windows 7 out of the box, without any registry hacks.

    My questions then are: How should I do this migration? Can I keep the same domain name? What will happen to the users? Will they be recreated, and will the workstations fail to identify them as the same users, even if the actual usernames are the same? What steps would you recommend for migrating from Samba to Windows Server?

    Bonus question: If you think staying on Samba is the way to go with my current setup, I'm also interested in your thoughts.

    Read the article

  • Recommended setting for using Apache mod_mono with a different user

    - by Korrupzion
    Hello, I'm setting up an ASP.NET script on my Linux machine using mod_mono. The script spawns processes of a binary that belongs to another user, but the process is spawned as www-data because Apache runs as that user, and I need it spawned as the user that owns the file. I tried the setuid bit, but it has no effect. I discovered that if I kill mod-mono-server2.exe and run it as the user I need, everything works right, but I want to know the proper way to do this, because after a while Apache runs mod-mono-server2.exe as www-data again. The Mono project webpage says:

        How can I run mod-mono-server as a different user?
        Due to apache's design, there is no straightforward way to start processes from inside of an apache child as a specific user. Apache's SuExec wrapper is targeting CGI and is useless for modules. Mod_mono provides the MonoStartXSP option. You can set it to "False" and start mod-mono-server manually as the specific user. Some tinkering with the Unix socket's permissions might be necessary, unless MonoListenPort is used, which turns on TCP between mod_mono and mod-mono-server. Another (very risky) way: use a setuid 'root' wrapper for the mono executable, inspired by the sources of Apache's SuExec.

    I want to know how to use the setuid wrapper, because I tried adding the setuid bit to the mono binary and changing its owner to the user that I want, but that made Mono crash. Or maybe there is a way to keep mod-mono-server2.exe running separately from Apache without being closed (anyone have a script?). My environment: Debian Lenny 2.6.26-2-amd64, Mono 1.9.1, mod_mono from the Debian repository, a dedicated server (root access and stuff), using Apache vhosts. I use Mono only for that script. Thanks!
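
    A sketch of the MonoStartXSP=False route the Mono page describes, with mod-mono-server2 kept alive under the owning user. The socket path, user name, and application path are assumptions, and flag spellings should be checked against mod-mono-server2 --help for this build:

        # in the vhost: MonoStartXSP False, MonoUnixSocket /tmp/mod_mono_appuser
        su - appuser -c 'mod-mono-server2 \
            --filename /tmp/mod_mono_appuser \
            --applications /:/var/www/app \
            --nonstop &'

        # Apache must be able to reach the socket
        chgrp www-data /tmp/mod_mono_appuser
        chmod 660 /tmp/mod_mono_appuser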

    Read the article

  • Redhat 5.5: Multi-thread process only uses 1 CPU of the available 8

    - by Tonny
    Weird situation: Red Hat Enterprise Linux 5.5 (stock install, no updates, x64) on an HP Z800 workstation (dual Xeon 2.2 GHz, 8 cores, 16 if you count hyper-threading; RHEL sees 16 cores). We have an application that can utilize 1, 2 or 4 threads for heavy calculations. Somehow all of these threads run on the same core at 100% load (the other 15 cores are nearly idle), so there is absolutely no benefit from the extra threads. In fact, there is a slight slowdown, as the threads get in each other's way on the single core. How do I get them to run on separate cores (if possible)? The application is 64-bit. I can't change anything about the software except the threads setting. Is there some obscure Linux setting I can try to change? (I'm a Tru64 and AIX guy. I use Linux, but have no in-depth knowledge of process scheduling on Linux.)
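
    Worth ruling out CPU affinity before blaming the scheduler: if the process was started pinned to one core, every thread inherits that mask. A sketch with a hypothetical PID:

        PID=12345                 # hypothetical application process

        taskset -pc $PID          # "current affinity list: 0" would explain it
        taskset -pc 0-15 $PID     # widen the mask to all 16 logical CPUs

        # per-thread view: PSR is the CPU each thread last ran on
        ps -Lo pid,tid,psr,pcpu -p $PID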

    Read the article

  • Help building maya render node spec

    - by Ak
    Hi there, I'm looking to build 4x Maya render slaves/nodes for a friend of mine when his project gets green-lit. The project involves mental ray and lots of glass. I'm unsure whether the new i7 9xx or 8xx chips with hyper-threading will do any better than a Core 2 Quad of the same (or close enough) speed. Does hyper-threading make a difference to Maya, or is it more about per-core performance? I'm sure he'd prefer I build another render node than pay for a bleeding-edge CPU that only adds fractionally more GHz.

    The rest of the spec so far:

        4GB - 8GB RAM
        64-bit OS: probably Windows 7 (I know Linux is free, but I want to build something my friend can support himself as easily as he supports his own workstation)
        1TB HDD to hold textures, Maya files, and renders, which will be copied to central storage later
        Motherboard with on-board video, gigabit NIC
        500 - 650 watt PSU
        Desktop case, something like a Cooler Master ATCS 840

    The machines will be sold afterwards if necessary. If anyone has experience with Maya and has done any tests with the new CPUs vs. the older ones, I'd really appreciate your input.

    Read the article
