Search Results

Search found 8692 results on 348 pages for 'per magnusson'.


  • Identifying Exchange 2010 regular process that is walking the mailbox database

    - by toongeneral
    I have an Exchange 2010 server running on a SAN-backed platform. The platform does block-level backups on a snapshot/incremental basis that only capture changed data. I was surprised to see a regular window in which data was changing at a high, sustained rate; due to the way this system works, that can lead to 1.2TB of stored data per month. The regularity implied a scheduled task, but it is not a fixed interval: it is approximately every 26-32hrs. The disks were performing read operations of ~5MB/s and write operations of ~4.5MB/s for a period of 3-4hrs, and the total data written was ~55-60GB. Reading on TechNet, I am wondering if the following is causing this: http://blogs.technet.com/b/exchange/archive/2011/12/14/database-maintenance-in-exchange-2010.aspx#checksumming The somewhat restrictive thing is that this process only happens at most once every 24 hours. I was able to investigate while it was running and found the following: the process is store.exe; it is working on the mailbox database files; while running, it is generating .log files (in the mailbox database folder) consistent with database changes; and the mailbox database is ~60GB in size, which fits the total data changed on each iteration. I have currently switched to a fixed maintenance window as a test. It's not clear whether this is the cause; the symptoms fit, but are not conclusive. Does anyone have any suggestions for additional troubleshooting?
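
    One way to pin down whether the bursts line up with background database maintenance is to log store.exe's I/O counters over time and compare the spikes with the suspected window. Below is a minimal sketch of such a logger, assuming the third-party psutil package is available on the server; the five-minute interval and the store_io_log.csv filename are arbitrary choices, not anything Exchange-specific.

      import csv
      import time
      from datetime import datetime

      import psutil

      INTERVAL_S = 300  # sample every 5 minutes; arbitrary choice

      def find_store():
          # Locate the Exchange Information Store process, if it is running.
          for proc in psutil.process_iter(["name"]):
              if proc.info["name"] and proc.info["name"].lower() == "store.exe":
                  return proc
          return None

      with open("store_io_log.csv", "a", newline="") as fh:
          writer = csv.writer(fh)
          prev = None
          while True:
              proc = find_store()
              if proc is not None:
                  try:
                      io = proc.io_counters()
                  except psutil.Error:
                      io = None  # process went away between lookup and query
                  if io is not None and prev is not None and io.read_bytes >= prev.read_bytes:
                      # Average bytes/second since the previous sample.
                      read_rate = (io.read_bytes - prev.read_bytes) / INTERVAL_S
                      write_rate = (io.write_bytes - prev.write_bytes) / INTERVAL_S
                      writer.writerow([datetime.now().isoformat(),
                                       round(read_rate), round(write_rate)])
                      fh.flush()
                  if io is not None:
                      prev = io
              time.sleep(INTERVAL_S)

    Plotting the resulting CSV alongside the SAN's change-rate graphs should make it obvious whether store.exe's maintenance pass is the source of the churn.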

    Read the article

  • Where can someone store >100GB of pictures online? [closed]

    - by sbi
    A person who is not very computer-savvy needs to store 130GB of photos. The key parameters are: a non-negligible probability that the company selling the storage will still exist, and the data still be accessible, for at least five years; data should be considered safe once uploaded; reasonable terms of service (Google Drive reserving the right to do literally anything they want with their users' data is not acceptable; the possibility that the CIA might look at those pictures is not considered a threat); easy to use from Windows, preferably as a drive; no nerve-wracking limitations ("cannot upload 10GB/day" or "files 500MB" etc.) that serve no purpose other than pushing the user to the next-higher price plan; some upgrade path: there's currently 10-30GB of new photos per year, with a tendency to increase, which might bust a 150GB limit next January; the ability to somehow sort the pictures: currently they are sorted into folders, but something similar (tags) would be just as good, if easy enough to apply; and of course the pricing is important (although there's a reason this is the last bullet; reasonable data safety is considered more important). Nice to have, but not necessary, would be: additional features related to photos (thumbnail generation, album sharing etc.) and access from the web and from platforms other than Windows (smart phones). Let me stress this again: the person in need of this is able to copy pictures from the camera to the computer, can copy files in Explorer, and uses a web email service. That's about it; there's almost no understanding of what happens under the hood.

    Read the article

  • How to make lighttpd respect X-Forwarded-Proto when constructing redirects for directories?

    - by Tim Landscheidt
    We have an nginx proxy at tools.wmflabs.org that receives requests by http and https and passes them by http on to lighttpds on a grid (one lighttpd per top-level path). Requests that reach the proxy by https are received by the lighttpds like this:

      HEAD /lighttpd-test/test HTTP/1.1
      Connection: close
      Host: tools.wmflabs.org
      X-Forwarded-Proto: https
      X-Original-URI: /lighttpd-test/test
      User-Agent: curl/7.29.0
      Accept: */*

    This works great except in the case where the URL references a physical directory and is missing the trailing slash ("/"), as lighttpd then generates a redirect to the http URL:

      HTTP/1.1 301 Moved Permanently
      Location: http://tools.wmflabs.org/lighttpd-test/test/
      Connection: close
      Date: Fri, 06 Jun 2014 14:50:29 GMT
      Server: lighttpd/1.4.28

    The relevant parts of our lighttpd configurations are:

      server.modules = ( "mod_setenv", "mod_access", "mod_accesslog", "mod_alias", "mod_compress", "mod_redirect", "mod_rewrite", "mod_fastcgi", "mod_cgi", )
      server.port = $port
      [...]
      server.document-root = "$home/public_html"
      [...]
      server.follow-symlink = "enable"
      [...]
      server.stat-cache-engine = "fam"
      ssl.engine = "disable"
      alias.url = ( "/$tool" => "$home/public_html/" )
      index-file.names = ( "index.php", "index.html", "index.htm" )
      dir-listing.encoding = "utf-8"
      server.dir-listing = "disable"
      url.access-deny = ( "~", ".inc" )
      [...]

    How can I make lighttpd respect X-Forwarded-Proto and use it when constructing redirects for directories? I'm aware that I could try to tackle this in nginx, but I'd prefer to fix it in lighttpd.
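
    Before changing anything it can help to reproduce the redirect outside the browser. The sketch below issues the same HEAD request the proxy would forward, including X-Forwarded-Proto, and prints the Location header that comes back; the host and path are taken from the question, and in practice the request would be pointed at the backend lighttpd directly rather than at the public proxy.

      import http.client

      BACKEND_HOST = "tools.wmflabs.org"  # assumption: replace with the backend lighttpd address
      PATH = "/lighttpd-test/test"        # directory URL without the trailing slash

      conn = http.client.HTTPConnection(BACKEND_HOST, 80, timeout=10)
      conn.request("HEAD", PATH, headers={
          "Host": "tools.wmflabs.org",
          "X-Forwarded-Proto": "https",
          "Connection": "close",
      })
      resp = conn.getresponse()
      # If the Location header starts with http:// despite the forwarded header,
      # the redirect scheme is being decided by lighttpd, not the proxy.
      print(resp.status, resp.reason)
      print("Location:", resp.getheader("Location"))
      conn.close()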

    Read the article

  • apache/httpd responds slower under EL6.1 than EL5.6 (centos)

    - by daniel
    I've read through other threads on performance differences between RHEL6 and RHEL5, but none seems to match my situation closely. My issue manifests itself as a slightly slower average response time (20ms) per request. I have about 10 servers on CentOS 6.1 and 10 on CentOS 5.6, all of the same hardware spec, and the issue is consistent across the group. I am running Ruby on Rails with Passenger under mod_worker. The Apache config is identical (checked out from the same SVN repo), Ruby and Passenger are identical builds, and the application is identical and served traffic round-robin. An interesting clue from server-status: the Cent6.1 servers have a steady 20-40 threads in the "Reading Request" state while the Cent5.6 servers have around 1. I'm graphing this so I can see it trend over time. I also have a bunch of much newer machines that are significantly faster and are running Cent6.1. They dust all the older machines in response time, but I can see they also have a steady 20-40 threads in the "Reading Request" state. This makes me believe I can get their response time down if I can figure out what is holding up these requests. My gut tells me I need to tune some network setting in sysctl, but I haven't figured it out yet. Help is appreciated.
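
    Since the suspicion is a sysctl-level difference, a quick first step is to dump the same set of network-related kernel parameters on a CentOS 5.6 box and a CentOS 6.1 box and diff the output. The parameter list in this sketch is only illustrative, not a claim about which knobs are responsible.

      import os

      PARAMS = [
          "net/core/somaxconn",
          "net/core/netdev_max_backlog",
          "net/ipv4/tcp_max_syn_backlog",
          "net/ipv4/tcp_fin_timeout",
          "net/ipv4/tcp_tw_reuse",
      ]

      for param in PARAMS:
          path = os.path.join("/proc/sys", param)
          try:
              with open(path) as fh:
                  value = fh.read().strip()
          except OSError:
              value = "<missing>"
          # Print in sysctl dotted notation so the output diffs cleanly between hosts.
          print(f"{param.replace('/', '.')} = {value}")

    Running this (or simply `sysctl -a`) on one host from each generation and diffing the results narrows the search to settings whose defaults actually changed between the kernels.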

    Read the article

  • .bat file - Nagios v3.2 service check and start if stopped

    - by LbakerIT
    I'm just barely getting into programming, so I apologize for my ignorance. I'm trying to create a .bat file that will check whether a service is running on XP Pro. If the service is running, it exits 0. If the service is stopped: start the service, wait 10 seconds (via ping, I'm guessing), then check again; if the service is running, exit 0; if it is still stopped, start it and wait 10 seconds again. Do this check a total of 3 times; if the service does not come up within that time, exit 2. (Exit 0 = OK, exit 1 = warning, exit 2 = critical, and critical is what will alert.) I need to do this for 3 different services, but I expect it would be better to create one script per service; that way you get notified about the specific service that is not coming back up. The goal is that if the service stops, the script will start it, and if after 30 seconds it is unable to start the service, it will send an alert. The reason I'm trying to do it with a .bat is that this is consistent with all our other scripts, and I did not want to complicate things further by adding different kinds of code. Yay for consistency! Again, I apologize for my ignorance; I've been thrown into this project last minute. Thank you for the help and for reading my question!
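
    Purely as an illustration of the flow (check, start, wait 10 seconds, re-check, up to three times, then exit 2), here is a hedged sketch in Python using the standard Windows sc and net commands. The service name is a placeholder, and the same structure translates directly to a .bat file if that is the preferred wrapper.

      import subprocess
      import sys
      import time

      SERVICE_NAME = "Spooler"  # placeholder: substitute the real service name
      ATTEMPTS = 3
      WAIT_SECONDS = 10

      def is_running(name):
          # "sc query" prints a STATE line containing RUNNING when the service is up.
          out = subprocess.run(["sc", "query", name], capture_output=True, text=True)
          return "RUNNING" in out.stdout

      for attempt in range(ATTEMPTS):
          if is_running(SERVICE_NAME):
              print(f"OK - {SERVICE_NAME} is running")
              sys.exit(0)
          # Try to start it, then give it time to come up before re-checking.
          subprocess.run(["net", "start", SERVICE_NAME], capture_output=True, text=True)
          time.sleep(WAIT_SECONDS)

      if is_running(SERVICE_NAME):
          print(f"OK - {SERVICE_NAME} started after restart attempts")
          sys.exit(0)

      print(f"CRITICAL - {SERVICE_NAME} failed to start after {ATTEMPTS} attempts")
      sys.exit(2)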

    Read the article

  • How can I get (g)Vim to display the character count of the current file?

    - by OwenP
    I like to write tutorials and articles for a programming forum I frequent. This forum has a character limit per post. I've used Notepad++ in the past to write posts and it keeps a live character count in the status bar. I'm starting to use gVim more and I really don't want to go back to Notepad++ at this point, but it is very useful to have this character count. If I go over the count, I usually end up pasting the post into Notepad++ so I can see when I've trimmed enough to get by the limit. I've seen suggestions that :set ruler would help, but this only gives the character count via the current column index on the current line. This would be great if I didn't use paragraph breaks, but I'm sure you'd agree that reading several thousand characters in one paragraph is not comfortable. I read the help and thought that rulerformat would work, but after looking over the statusline format it uses I didn't see anything that gives a character count for the current buffer. I've seen that there are plugins that add this, but I'm still dipping my toes into gVim and I'm not sure I want to load random plugins before I understand what they do. I'd prefer to use something built in to vim, but if it doesn't exist it doesn't exist. What should I do to accomplish my goal? If it involves a plugin, do you use it and how well does it work?

    Read the article

  • Windows 7 misses keystrokes from internal keyboard after hibernation (on Acer Aspire 5820)

    - by ron
    I face a very strange symptom on my Acer Aspire laptop (with the factory-default Win7 install and drivers; Windows Update running). After waking the computer from hibernation it is a pain to type, since on average 5-10 keypresses per 100 are missed when using the laptop's keyboard. Steps to reproduce: 1) Power off. 2) Power on, wait for the system to become usable. 3) Open Notepad and, five times in a row, hit the same character 10x. This gives a pattern of 50 chars like: xxxxxxxxxxyyyyyyyyyyaaaaaaaaaassssssssssdddddddddd 4) Optionally repeat. Everything is fine this far. 5) Hibernate. 6) Power on and resume. 7) Repeat steps 3)-4). This time approximately 3-5 characters will be missing from each 50. What I ruled out: putting the machine to Sleep or just Locking it and resuming from there does not cause the problem; battery/AC usage does not matter; net connection does not matter; running processes seem to be the same before and after hibernation; keypress speed doesn't really matter (for the test I use a nominal 3-5 strokes/second beat); plugging in an external USB keyboard works fine, but the built-in one still misbehaves. What could be the problem? How could I diagnose whether the keypresses arrive but get swallowed at some point (maybe some nasty keyboard handler hook misbehaves)? Update: It seems that pushing the PowerSmart button and toggling to the power-saving state fixes the problem. Also, toggling it again back to the original state keeps it fixed. So this may be a fine workaround, but it is not a proper solution.
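
    To answer the "do the keypresses arrive at all?" question, one option is to hook keyboard events in user space and compare the log with what actually appears on screen after a resume. A rough sketch follows, assuming the third-party keyboard package (pip install keyboard); the log filename is arbitrary.

      import time

      import keyboard

      LOG_FILE = "keylog_diag.txt"  # arbitrary filename for the test session

      def on_event(event):
          # event_type is "down" or "up"; name is the key name.
          with open(LOG_FILE, "a") as fh:
              fh.write(f"{time.time():.3f}\t{event.event_type}\t{event.name}\n")

      keyboard.hook(on_event)
      print("Logging key events; press ESC to stop.")
      keyboard.wait("esc")

    If the log contains the full set of presses while Notepad shows gaps, the events are reaching user space and something downstream (an application-level hook or filter) is dropping them; if the log itself has gaps, the loss is at the driver or firmware level.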

    Read the article

  • Can't upgrade MySQL Server on new Ubuntu 12.04 install

    - by user179627
    After freshly installing Ubuntu Server 12.04, I did the usual apt-get update / apt-get upgrade, which failed for mysql-server-5.5:

      Setting up mysql-server-5.5 (5.5.31-0ubuntu0.12.04.2) ...
      start: Job failed to start
      invoke-rc.d: initscript mysql, action "start" failed.
      dpkg: error processing mysql-server-5.5 (--configure):
       subprocess installed post-installation script returned error exit status 1
      dpkg: dependency problems prevent configuration of mysql-server:
       mysql-server depends on mysql-server-5.5; however:
        Package mysql-server-5.5 is not configured yet.
      dpkg: error processing mysql-server (--configure):
       dependency problems - leaving unconfigured

    I tried a wide variety of approaches suggested by googling, involving various combinations of apt-get remove/purge/install -f/reinstall, etc., with no luck. I also tried downloading the package directly from launchpad.net and running dpkg -i on it (this had worked for a similar issue with a kernel upgrade), but to no avail. I'm not actually particularly interested in what's going on with MySQL per se (though I will need to figure it out at some point); at this point, my primary concern is that I am unable to apt-get install other packages! What should I do?

    Read the article

  • Hosting options for data-enabled web application

    - by Hertfordian
    I am independently developing an ASP.NET business application with a MySQL database. I currently have a Windows web hosting account which includes MySQL and MS SQL as installed, supported options. I am not yet finally committed to using MySQL, and I want to keep my options open to evaluate MS SQL and possibly other options such as PostgreSQL later, when more of the business logic is in place; my data access layer will handle the database connectivity. The web hosting setup I have now is fine for development purposes, but if in future I want to use, say, PostgreSQL, with a level of usage of, say, 10,000 hits per day concentrated in business hours, I'm assuming I'll need a dedicated server. But in that case, should I just install PostgreSQL on the dedicated server, or is it best practice to have a separate database server, perhaps locked down so that it can only be accessed through the web server? And supposing it was only 2,000 hits a day: how would that change things? I'd appreciate it if anyone could point me in the direction of a useful guide to these sorts of issues. Naturally, if I start paying for separate servers, I would like to know exactly why I'm doing it and what the performance issues and thresholds are.

    Read the article

  • Faster caching method

    - by pataroulis
    I have a service that provides HTML code which, at some point, is no longer updated. The code is always generated dynamically from a database with 10 million entries, so each page rendering searches there for, say, 60 or 70 of those entries and then renders the page. For those expired pages I want to use a caching system which will be VERY simple (just insert a record with the rendered HTML and, if needed, remove it). I tried to do it file-based, but checking for the existence of a file and then passing it through PHP to actually render it seems like too much for what I want to do. I was thinking of doing it in MySQL with a table of MEDIUMBLOBs (each page is around 100k). It would hold about 150,000 such records (for now, at least). My question is: would it be faster to let MySQL do the lookup and pass the page to PHP, or is the file-based approach faster? The lookup code for the file-based version looks like this:

      $page = @file_get_contents(getCacheFilename($pageId));
      if($page!=NULL) {
          echo $page;
      } else {
          renderAndCachePage($pageId);
      }

    which does one lookup whether it finds the file or not. The MySQL table would just have an ID (the page id) and the blob entry. The disk of the system is a simple SATA RAID 1, and the MySQL daemon can grab up to 2.5GB of memory (I have a proxy running too, eating the rest of the machine's 16GB). In general the disk is quite busy already. The reason I'm not using PEAR Cache is that I think (please feel free to correct me on this) it adds overhead I do not need, because the page rendering code is called about 2M times per day and I wouldn't want to go through the whole code each time (and yes, I have eAccelerator to cache the code too). Any pointer on which direction I should go would be greatly welcome. Thanks!
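
    If the file-based route wins, one detail worth getting right with ~150,000 cache files is directory layout. The following is a rough Python sketch (mirroring the PHP lookup above, not a drop-in replacement) of a cache that hashes the page id into two levels of subdirectories so no single directory holds everything; the cache root path and render_func are placeholders.

      import hashlib
      import os

      CACHE_ROOT = "/var/cache/rendered_pages"  # placeholder path

      def cache_path(page_id):
          # Hash the id so ~150k files spread over 256*256 subdirectories.
          digest = hashlib.md5(str(page_id).encode()).hexdigest()
          return os.path.join(CACHE_ROOT, digest[:2], digest[2:4], f"{page_id}.html")

      def get_page(page_id, render_func):
          path = cache_path(page_id)
          try:
              with open(path, encoding="utf-8") as fh:
                  return fh.read()          # cache hit: one open() call, no DB work
          except FileNotFoundError:
              html = render_func(page_id)   # cache miss: do the expensive render once
              os.makedirs(os.path.dirname(path), exist_ok=True)
              with open(path, "w", encoding="utf-8") as fh:
                  fh.write(html)
              return html

    Keeping directories small avoids the filesystem lookup cost growing with the cache, which is usually the deciding factor when comparing flat files against a single indexed blob table on an already busy disk.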

    Read the article

  • ZFS Data Loss Scenarios

    - by Obtuse
    I'm looking toward building a largish ZFS pool (150TB+), and I'd like to hear people's experiences with data-loss scenarios due to failed hardware, in particular distinguishing between instances where just some data is lost vs. the whole filesystem (or whether there even is such a distinction in ZFS). For example: let's say a vdev is lost due to a failure like an external drive enclosure losing power, or a controller card failing. From what I've read the pool should go into a faulted mode, but if the vdev is returned, will the pool recover or not? And if the vdev is partially damaged, does one lose the whole pool, some files, etc.? What happens if a ZIL device fails? Or just one of several ZILs? Truly any and all anecdotes or hypothetical scenarios backed by deep technical knowledge are appreciated! Thanks! Update: We're doing this on the cheap since we are a small business (9 people or so), but we generate a fair amount of imaging data. The data is mostly smallish files, by my count about 500k files per TB. The data is important but not uber-critical. We are planning to use the ZFS pool to mirror a 48TB "live" data array (in use for 3 years or so), and to use the rest of the storage for 'archived' data. The pool will be shared using NFS. The rack is supposedly on a building backup generator line, and we have two APC UPSes capable of powering the rack at full load for 5 minutes or so.

    Read the article

  • Online FTP or file sharing service [on hold]

    - by Frede
    We need to share large files with clients; e.g. clients upload a large file, we modify it, and later make it available for download. Up until now we've used FTP, but this has a number of drawbacks: a lot of management of files, setting up accounts, etc. We are therefore considering online alternatives. Requirements: cheap 8-); easy to use, ideally just requiring a web browser, but also possible for power users to connect e.g. via FTPS/SFTP; no registration required for users to upload/download files (we ourselves of course need to be able to log in, view uploaded files and upload new files); no per-user fee; high bandwidth: as files may be GBs in size, neither upload nor download speed can be too slow; secure: encryption during upload/download, and no way for users to access uploaded files: once a user has uploaded a file, they (or anyone else besides us) should not be able to access it; to download files, users get a link with a password, and ideally the link expires after a set time; no software installation. We do NOT need any sync features, backup, versioning, etc. Just a quick, easy, secure way for us to share files with our clients. Services like JustCloud, DriveHQ etc. seem bloated and "too much" for what we need. What other alternatives exist? Thanks!

    Read the article

  • How to securely control access to a backend key server?

    - by andy
    I need to securely encrypt data in my database so that if the database is dumped, hackers are unable to decrypt the data. I'm planning on creating a simple key server on a different machine, and allowing the DB server access to it (restricted by IP address on the key server to permit the DB server). The key server would contain the key required to encrypt/decrypt data. However, if a hacker were able to get a shell on the DB server, they could request the key from the key server and therefore decrypt the data in the database. How could I prevent this (assuming all firewalls are in place, DB is not connected directly to the internet, etc)? i.e. is there some method I could use that could secure a request from the DB server to the key server so that even if a hacker had a shell on the DB server they'd be unable to make those same requests? Signed requests from the DB server could make issuing these requests less trivial - I suppose that'd help increase the amount of time it'd take to compromise the key server, something a hacker probably wouldn't have much of. As far as I can see, if someone can get a shell on the DB server everything's lost anyway. This could be mitigated by using one key per data item in the DB so at least there's not a single "master" key, but multiple keys that the hacker would need to access. What would be a secure method of ensuring requests from the DB server to the key server were authentic and could be trusted?
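
    As a sketch of the "signed requests" idea: the DB server could attach a timestamp, a random nonce, and an HMAC computed with a shared secret, and the key server could reject anything stale, replayed, or mis-signed. This raises the bar and gives you something to alert on, but, as noted above, it does not stop an attacker who already controls the DB server from issuing valid requests. All names below are placeholders and the transport is left out.

      import hashlib
      import hmac
      import os
      import time

      SHARED_SECRET = b"replace-with-a-long-random-secret"  # placeholder, provisioned out of band
      SEEN_NONCES = set()  # in a real deployment this would be persistent/shared state

      def sign_request(key_id):
          # Built on the DB server for each key fetch.
          timestamp = str(int(time.time()))
          nonce = os.urandom(16).hex()
          message = f"{key_id}|{timestamp}|{nonce}".encode()
          signature = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
          return {"key_id": key_id, "timestamp": timestamp,
                  "nonce": nonce, "signature": signature}

      def verify_request(req, max_skew=60):
          # Checked on the key server before any key material is returned.
          if abs(time.time() - int(req["timestamp"])) > max_skew:
              return False  # stale request
          if req["nonce"] in SEEN_NONCES:
              return False  # replayed request
          message = f"{req['key_id']}|{req['timestamp']}|{req['nonce']}".encode()
          expected = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
          if not hmac.compare_digest(expected, req["signature"]):
              return False  # bad signature
          SEEN_NONCES.add(req["nonce"])
          return True

      if __name__ == "__main__":
          request = sign_request("customer-data-key")
          print("accepted:", verify_request(request))

    Combined with per-item keys and rate limiting/alerting on the key server, this at least turns a silent bulk decryption into a noisy, throttled one.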

    Read the article

  • I really need help resolving a Window Vista BSOD (Blue Screen Crash) on my desktop

    - by anonymous
    Hi, thanks for taking the time to read this; I'll get straight to the details. My desktop is on the fritz: it keeps going to a blue screen with the stop message 0x0000007E immediately after the Vista loading bar, right before transitioning to the account selection screen. My desktop runs a dual-core 32-bit processor with Windows Vista Home(?) installed. I have 3GB of RAM as two separate modules, a 1GB Acer module and a 2GB Geil module. I have an ATI video card; unfortunately I cannot recall the exact name, but the chipset is ATI, the manufacturer is Sapphire, and the card is on the lower end. My hard drive is 320GB (I think), partitioned into two. The C:\ partition is red-lined (nearly full), while the D:\ partition is still pretty empty. As per the advice of my friend, I tried restarting the system with the graphics card removed. Upon failure, I repeated the process removing one RAM module at a time, but the system still failed to load. Vista would attempt to repair the system and initially report that the system was fixed, but it really failed to fix the problem. After removing the memory modules, Vista started to report its inability to fix the problem. I tried running in safe mode and the driver listing would always stop at crcdisk.sys. I ran memory diagnostics using the Windows Memory Diagnostic tool found on the screen after Vista's failed repair attempt, with no luck. The problem details are as follows: Problem Event Name: StartupRepairV2; Problem Signature 01: AutoFailover; 02: (Vista's version number?); 03: 6; 04: 720907; 05: 0x7e; 06: 0x7e; 07: 0; 08: 2; 09: WrpRepair; 10: 0; OS Version: 6.0.6000.2.0.0.256.1; Locale ID: 1033. Any correct advice would be appreciated, as I really need my PC to work so I can work on my projects. Kinda sad, but I'm in a college of computer science and I have no idea what to do :P

    Read the article

  • windows server 2003 speed issues

    - by farzinSH
    I have an HP server running Windows Server 2003 and 50 Windows XP clients. For the past week and a half, the network speed has suddenly dropped 2-3 times per day. It gets so slow that none of the clients can work with the HIS program installed on them. We have tried many different things, such as replacing the hubs, switches and even some wires. Each time, one of these changes solves the problem and the network goes back to its normal state. I checked everything. Even when I disconnected all the clients from the server and connected it to just one computer, the problem still remained for 2 hours. I have narrowed the problem down to a couple of likely suspects: viruses? (an updated Kaspersky running on the server shows none); server hardware failure?; physical memory usage on the server? (the last time the problem occurred, none of the changes above solved the issue, so I restarted the server and checked the physical memory usage, which was 2GB, but I noticed it increases over time to over 9GB; the server has 16GB of RAM). I surfed the internet and got nothing. Any help would mean a lot to us... thanks in advance.
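
    Since the slowdowns are intermittent, it may help to timestamp them precisely before chasing more hardware. A simple sketch: run the following on one client so that each slowdown shows up as a latency spike in the log, which can then be lined up against the server's memory growth. The target address and log filename are placeholders, and the ping flags are the Windows variant.

      import re
      import subprocess
      import time
      from datetime import datetime

      TARGET = "192.168.1.10"  # placeholder: the server's IP address
      LOG = "ping_log.txt"     # placeholder filename

      while True:
          # Windows ping: "-n 1" sends a single echo request.
          result = subprocess.run(["ping", "-n", "1", TARGET],
                                  capture_output=True, text=True)
          match = re.search(r"time[=<](\d+)ms", result.stdout)
          latency = match.group(1) + " ms" if match else "timeout/unparsed"
          with open(LOG, "a") as fh:
              fh.write(f"{datetime.now().isoformat()}  {latency}\n")
          time.sleep(60)

    If latency stays low during the slow periods while the HIS application still crawls, the problem is more likely the server (memory pressure, a leaking process) than the cabling or switches.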

    Read the article

  • Will the removal of NAT (with the use of IPv6) be bad for consumers? [closed]

    - by Jonathan.
    Possible Duplicate: How will IPv6 impact everyday users? (World IPv6 Day) As I understand it, when we have finally made the switch to IPv6, not only will NAT be unnecessary, but it is also incompatible with IPv6? Will that mean ISPs will have to serve multiple IP addresses per customer? Will they provide a range of addresses for each customer, or, as each device connects, will it get an IP address that isn't necessarily near those of the other devices in the house? But overall, will this be bad for Internet users? Surely it will allow ISPs to see exactly how many devices are being used, and so allow them to charge for the use of additional IP addresses? And then, if that happens, what happens when you try to connect an extra device to your network? Will it simply not get an IP address? In my home we have about 15-20 devices connected at once, but for places where there are hundreds of devices, it seems like the perfect opportunity for ISPs to charge more? I think I may have it completely wrong, so is there somewhere with an explanation of how things will work when IPv6 becomes the norm?

    Read the article

  • Huge discrepancy in Inkscape file size

    - by Keyran
    When using Inkscape to create many pictures with common elements across them, I tend to copy the first SVG file I have created as many times as I need pictures, and then edit the copies. If I reuse files across projects, a file can end up being copied and modified tens to hundreds of times. I have recently realized that the latest copies have a size between 29 and 60 MB, slowing my computer down significantly. My pictures are very simple, nothing that would normally go over 1 MB in size. As an experiment, I copied the entire content of one of the latest files into a new Inkscape file. I am certain that I copied the content of the file entirely (I have only one layer and I used the "Select All" option). The new file has a size of 102.2 KB. This would indicate that about 30 MB of data per file is irrelevant to me. What could be the cause of this size difference? Is there a way to reduce the size of a file without having to copy the content into a new file? I am using Inkscape 0.48.4 on Debian Unstable. Thanks for any input you might be able to provide!
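
    To see what is actually eating the space, one option is to tally the size of each element type in the bloated SVG; oversized defs (leftover clones, filters, markers) or embedded raster images are the usual suspects, and Inkscape's File > Vacuum Defs command can remove unused definitions without copying content to a new file. Below is a rough sketch using only the standard library; the filename is a placeholder.

      import xml.etree.ElementTree as ET
      from collections import Counter

      def tag_sizes(svg_path):
          sizes = Counter()
          for _, elem in ET.iterparse(svg_path):
              # Strip the namespace, keep just the local tag name.
              tag = elem.tag.split("}")[-1]
              text_len = len(elem.text or "") + len(elem.tail or "")
              attr_len = sum(len(k) + len(v) for k, v in elem.attrib.items())
              sizes[tag] += text_len + attr_len
          return sizes

      for tag, size in tag_sizes("drawing.svg").most_common(10):
          print(f"{tag:20s} ~{size / 1024:.1f} KiB of text/attribute data")

    If most of the bytes land in image (base64-embedded bitmaps) or in defs children, that points directly at the cleanup needed in the originals before they get copied again.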

    Read the article

  • Some Portions of Computer Running Slow (Specifically Graphics)

    - by Mike Gates
    I noticed that a few things are running slowly today on my Windows 7 laptop. Specifically: opening and closing windows takes several seconds for the animation to complete; Windows Media Player opens fine, but movies are very laggy; MMORPGs, such as RuneScape, are extremely laggy; and when waking my computer from sleep mode, after entering my password, my desktop takes about 3 seconds to fade in. Other than those, everything runs at a normal speed. Things I've done that may have contributed to this problem: changed the graphics processor (by plugging in/unplugging the charger) [however, no matter how I change the graphics, I'm still getting this lagginess]; installed AdBlock, a Firefox addon [I recently removed it, and I'm still experiencing this problem]; went into Advanced System Settings, clicked Settings, and unchecked a few visual options (such as the animation for opening and closing windows) [sure, this got rid of the opening/closing windows lag, but I like that little animation, plus that leaves all the other lag problems I'm experiencing]. So, does anyone have any ideas/fixes? If so, please respond. Thank you. Some other information: I'm on an HP Pavilion dv7 laptop, 4285 Entertainment PC, with an Intel Core i5, ATI Mobility Radeon Premium Graphics, and Microsoft DirectX 11. Opening and closing of windows: defined as opening a program (e.g. Firefox) or closing it by hitting the X in the upper-right corner. Lately, the animation for opening and closing windows (which simply either grows from the taskbar icon to fill the screen, or shrinks from the screen down toward the icon on the taskbar) has been lagging; this problem also occurs when minimizing/maximizing windows. Very laggy movies: defined as .avi movie files saved to My Documents which skip several frames per second, seemingly slowing down the movie as a whole. Extremely laggy games: I tried RuneScape today, and movement in the game was at least 10x slower than it has ever been, even when playing on the lowest detail/graphics. Desktop taking 3 seconds to fade in after sleep: in this scenario, I had no other programs running visibly. The computer generally fades from the password screen to the desktop in about 1 second; however, it is now taking 3 or more seconds.

    Read the article

  • How can I find which logon script is being run?

    - by user2517266
    I'm having an issue with network drives. Suddenly some computers and users aren't getting their mapped network drives from the logon script. I am NOT a domain admin, I don't have permission to log in to the domain controller, and I know very little about Active Directory. The issue seems random: some users this day, different users tomorrow. Some computers run fine and some won't map no matter who logs in. They are a mix of OSes: XP (SP3), Vista, and 7. I was looking at the domain in Windows Explorer and found the batch file(s) that map the drives in several locations; how do I know which one is actually being run? The .bat file is located in \DOMAIN\NETLOGON\script.bat and \DOMAIN\SYSVOL\DOMAIN\scripts\script.bat and \DOMAIN\SYSVOL\DOMAIN\policies\GUID(Right? It's a crazy string)\User\Scripts\Logon\script.bat. So, how can I figure out which one is actually being run per computer or user? They are all slightly different from each other and one of them doesn't map properly. Do all the files in NETLOGON get run? There are 15+ files in there. Or is it specified in Group Policy which one(s) get run? EDIT: I am able to access a program called Active Directory Users and Computers, but the logon script field in any user's properties is blank.
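
    For a per-user answer, the scriptPath attribute that Active Directory hands out is visible from any domain-joined machine via net user <name> /domain, which prints a "Logon script" line; scripts assigned through Group Policy instead show up in gpresult output on the client. Here is a small sketch that parses the former (the default username is a placeholder):

      import subprocess
      import sys

      # Pass the account name on the command line, e.g.: python whichscript.py jsmith
      username = sys.argv[1] if len(sys.argv) > 1 else "someuser"  # placeholder default

      result = subprocess.run(["net", "user", username, "/domain"],
                              capture_output=True, text=True)
      for line in result.stdout.splitlines():
          if line.lower().startswith("logon script"):
              print(f"{username}: {line.strip()}")
              break
      else:
          print(f"{username}: no 'Logon script' line found (check the account name/domain)")

    If that field is empty for the affected users, the mapping is coming from a Group Policy logon script (the policies\GUID path), which narrows the search to that one file.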

    Read the article

  • [deb-5.0] Setup DNS on my server so I can put my IPs in as nameservers of my domain provider

    - by Maurycy Zarzycki
    Basically, my unmanaged VPS provider doesn't supply nameservers which I can use with my domain provider to route the domain to my server. As I've been told: "You need to configure the custom DNS server in your VPS to set up the custom nameservers. Please refer to the following article that would help: http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch18_:_Configuring_DNS Once you configure the nameserver records, please update the domain registrar panel with the custom nameserver details." I tried to follow this guide but it seems to be a bit outdated, and I am a complete newb with non-Windows systems. I also scanned Google for other articles which could help me with this problem but, alas, nothing I found was of any value for someone who doesn't already know this stuff inside out. I realize this is quite a complex thing to do, but maybe there is some way to automate it? Or a better solution, like a paid service which would act as my nameservers (this one would be interesting), or even some company which "rents" people to do stuff like that. Blah, any help will be appreciated; I am at a complete loss here. I can follow some of these steps, but then I soon find that half of the files mentioned in the article don't exist anywhere on the server, which confuses me, and once we get to the point of creating a zone I can't really decipher all the things written there :/. As per the title, my system is Debian 5.0.

    Read the article

  • Seeking faster access/transfer times for accounting application

    - by Markaway
    Our accounting software, Sage 50, has been getting slower at opening on workstations and reading the company file. The company file only contains 2 years' worth of transactions, and we just cleared out 2011, so the file size has gotten a lot smaller. There are 10 users, 6 of whom are on it all day and 4 on and off throughout the day. Our network is entirely GbE and the switches are set to prioritize traffic on that port number. Watching network traffic, we barely use 40% of the network capacity on a workstation, so I don't think that is our bottleneck. Our server contains two older 150GB Raptors, SATA 2 (3 Gb/s), in RAID 1. We were considering switching to SSDs, but a lot of what I read says to stay away from MLCs, especially in a production environment, and definitely to avoid putting them in a RAID config. So would upgrading to newer Raptors with SATA 3 (6 Gb/s) offer noticeable benefits? What other options are out there that aren't so expensive? I'm trying to keep it to 200-300 per drive. We need at least 150GB, but going to 250-300GB would be better as it gives us more room to grow. We have about 30% space remaining on what we have now.
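
    Before spending on drives, a crude baseline helps: time a sequential read of a large file (for example a copy of the company file) on the current RAID 1, then repeat on any candidate hardware. The sketch below does that in 1 MB chunks; the path is a placeholder, and the result is only indicative because the OS cache can skew a second run.

      import sys
      import time

      path = sys.argv[1] if len(sys.argv) > 1 else "testfile.dat"  # placeholder path
      chunk = 1024 * 1024
      total = 0

      start = time.perf_counter()
      with open(path, "rb") as fh:
          while True:
              data = fh.read(chunk)
              if not data:
                  break
              total += len(data)
      elapsed = time.perf_counter() - start

      print(f"Read {total / (1024**2):.0f} MiB in {elapsed:.2f} s "
            f"({total / (1024**2) / elapsed:.1f} MiB/s)")

    If the current mirror already streams well above what the application ever pulls over the wire, the slowdown is more likely random-I/O or application-side, and a faster sequential interface alone (SATA 2 vs SATA 3) won't change much.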

    Read the article

  • Reporting SQL Vulnerability [migrated]

    - by Ciaran87Bel
    This is my first post here, so I'll hopefully keep it simple. I have just finished building a CMS targeted at a certain industry and built a test site to see how everything works. Anyway, I wrote a program to check for SQL injection vulnerabilities and the program followed a blog link to an external website. The program discovered that the external site had a massive vulnerability that left it open to practically anyone, who could then access every bit of data on their MySQL server and run queries etc. The thing is, this external site is the brand leader in their industry and does millions upon millions of sales per annum. I have tried contacting them to let them know, and even went as far as contacting the company that built their platform, but I was pretty much brushed off and haven't heard back from them. Their database would contain the details of hundreds of thousands of customers and all their data. I could easily make myself a site admin in a few seconds, but they won't listen to me, even though I have offered to share the vulnerability with them and help in any way I can. Is there anything else I can do? It is one of the biggest security risks I have ever personally come across. Are there any other steps I should take to report this? Thanks

    Read the article

  • Webview crash with Garbage Collector ON

    - by user273666
    Hi, I have a very specific web page that causes webview to crash with the Garnage Collector ON (does not crash when OFF). Easy to reproduce: create a document base application, drop a webview, and have the following line (button perhaps). - (void)connectSearch:(id)sender { [[webView mainFrame] loadRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:@"http://apple.com"]]]; } I guess this scenario is only valid while Apple advertises their new iPad. At the bottom of the page there is two video you can watch. Click on the one on the right. When it is playing, click on the Close button (link) top left - which sends #SwapViewPreviousSelection - and that's it, it crashes. I'm just learning about the garbage collector but I suspect something is collected that should not. Any idea what can prevent the crash, other than turning off the garbage collector? Thank you. Here is what I get: Identifier: com.yourcompany.wb Version: 1.0 (1) Code Type: X86-64 (Native) Parent Process: launchd [163] Date/Time: 2010-02-15 12:26:31.069 -0500 OS Version: Mac OS X 10.6.2 (10C540) Report Version: 6 Interval Since Last Report: 432447 sec Crashes Since Last Report: 7 Per-App Interval Since Last Report: 2938 sec Per-App Crashes Since Last Report: 5 Anonymous UUID: CC123A77-1407-444A-9081-8A2B7C15C2B6 Exception Type: EXC_BREAKPOINT (SIGTRAP) Exception Codes: 0x0000000000000002, 0x0000000000000000 Crashed Thread: 0 Dispatch queue: com.apple.main-thread Application Specific Information: objc[70635]: garbage collection is ON Thread 0 Crashed: Dispatch queue: com.apple.main-thread 0 com.apple.CoreFoundation 0x00007fff82e0a788 CFRetain + 200 1 com.apple.QuartzCore 0x00007fff81677a98 -[CALayer setSublayers:] + 486 2 com.apple.WebCore 0x00007fff87c792a1 WebCore::GraphicsLayerCA::updateSublayerList() + 433 3 com.apple.WebCore 0x00007fff87c7ebd8 WebCore::GraphicsLayerCA::commitLayerChanges() + 840 4 com.apple.WebCore 0x00007fff87c7ed05 WebCore::GraphicsLayerCA::recursiveCommitChanges() + 21 5 com.apple.WebCore 0x00007fff87c7ed31 WebCore::GraphicsLayerCA::recursiveCommitChanges() + 65 6 com.apple.WebCore 0x00007fff87705296 WebCore::FrameView::paintContents(WebCore::GraphicsContext*, WebCore::IntRect const&) + 390 7 com.apple.WebKit 0x00007fff81b3d205 -[WebFrame(WebInternal) _drawRect:contentsOnly:] + 149 8 com.apple.WebKit 0x00007fff81b3ce77 -[WebHTMLView drawSingleRect:] + 455 9 com.apple.WebKit 0x00007fff81b3cc16 -[WebHTMLView drawRect:] + 566 10 com.apple.AppKit 0x00007fff8597b05e -[NSView _drawRect:clip:] + 3566 11 com.apple.AppKit 0x00007fff85978834 -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 2112 12 com.apple.WebKit 0x00007fff81b3dd6b -[WebHTMLView(WebPrivate) _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 299 13 com.apple.AppKit 0x00007fff859791bf -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4555 14 com.apple.AppKit 0x00007fff859791bf -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4555 15 com.apple.AppKit 0x00007fff859791bf -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4555 16 com.apple.AppKit 0x00007fff859791bf -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4555 17 com.apple.AppKit 0x00007fff859791bf -[NSView 
_recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4555 18 com.apple.AppKit 0x00007fff859791bf -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4555 19 com.apple.AppKit 0x00007fff85977e17 -[NSThemeFrame _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 254 20 com.apple.AppKit 0x00007fff859746bf -[NSView _displayRectIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:] + 2683 21 com.apple.AppKit 0x00007fff858edf37 -[NSView displayIfNeeded] + 969 22 com.apple.AppKit 0x00007fff858e8dde _handleWindowNeedsDisplay + 678 23 com.apple.CoreFoundation 0x00007fff82e74427 __CFRunLoopDoObservers + 519 24 com.apple.CoreFoundation 0x00007fff82e502d4 __CFRunLoopRun + 468 25 com.apple.CoreFoundation 0x00007fff82e4fc2f CFRunLoopRunSpecific + 575 26 com.apple.HIToolbox 0x00007fff88192a4e RunCurrentEventLoopInMode + 333 27 com.apple.HIToolbox 0x00007fff881927b1 ReceiveNextEventCommon + 148 28 com.apple.HIToolbox 0x00007fff8819270c BlockUntilNextEventMatchingListInMode + 59 29 com.apple.AppKit 0x00007fff858be1f2 _DPSNextEvent + 708 30 com.apple.AppKit 0x00007fff858bdb41 -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] + 155 31 com.apple.AppKit 0x00007fff85883747 -[NSApplication run] + 395 32 com.apple.AppKit 0x00007fff8587c468 NSApplicationMain + 364 33 com.yourcompany.wb 0x0000000100001c86 main + 33 (main.m:14) 34 com.yourcompany.wb 0x0000000100001a44 start + 52 Thread 1: Dispatch queue: com.apple.libdispatch-manager 0 libSystem.B.dylib 0x00007fff8874bbba kevent + 10 1 libSystem.B.dylib 0x00007fff8874da85 _dispatch_mgr_invoke + 154 2 libSystem.B.dylib 0x00007fff8874d75c _dispatch_queue_invoke + 185 3 libSystem.B.dylib 0x00007fff8874d286 _dispatch_worker_thread2 + 244 4 libSystem.B.dylib 0x00007fff8874cbb8 _pthread_wqthread + 353 5 libSystem.B.dylib 0x00007fff8874ca55 start_wqthread + 13 Thread 2: JavaScriptCore: FastMalloc scavenger 0 libSystem.B.dylib 0x00007fff8876d9ee __semwait_signal + 10 1 libSystem.B.dylib 0x00007fff887717f1 _pthread_cond_wait + 1286 2 com.apple.JavaScriptCore 0x00007fff80ae62b3 WTF::TCMalloc_PageHeap::scavengerThread() + 515 3 com.apple.JavaScriptCore 0x00007fff80ae62f9 WTF::TCMalloc_PageHeap::runScavengerThread(void*) + 9 4 libSystem.B.dylib 0x00007fff8876bf8e _pthread_start + 331 5 libSystem.B.dylib 0x00007fff8876be41 thread_start + 13 Thread 3: 0 libSystem.B.dylib 0x00007fff8874c9da __workq_kernreturn + 10 1 libSystem.B.dylib 0x00007fff8874cdec _pthread_wqthread + 917 2 libSystem.B.dylib 0x00007fff8874ca55 start_wqthread + 13 Thread 4: 0 libSystem.B.dylib 0x00007fff88732e3a mach_msg_trap + 10 1 libSystem.B.dylib 0x00007fff887334ad mach_msg + 59 2 com.apple.CoreFoundation 0x00007fff82e507a2 __CFRunLoopRun + 1698 3 com.apple.CoreFoundation 0x00007fff82e4fc2f CFRunLoopRunSpecific + 575 4 com.apple.Foundation 0x00007fff800de4cf +[NSURLConnection(NSURLConnectionReallyInternal) _resourceLoadLoop:] + 297 5 com.apple.Foundation 0x00007fff8005ee99 __NSThread__main__ + 1429 6 libSystem.B.dylib 0x00007fff8876bf8e _pthread_start + 331 7 libSystem.B.dylib 0x00007fff8876be41 thread_start + 13 Thread 5: 0 libSystem.B.dylib 0x00007fff887769e2 select$DARWIN_EXTSN + 10 1 com.apple.CoreFoundation 0x00007fff82e72242 __CFSocketManager + 818 2 libSystem.B.dylib 0x00007fff8876bf8e _pthread_start + 331 3 libSystem.B.dylib 0x00007fff8876be41 thread_start + 13 Thread 6: 0 libSystem.B.dylib 0x00007fff8874c9da __workq_kernreturn + 10 1 
libSystem.B.dylib 0x00007fff8874cdec _pthread_wqthread + 917 2 libSystem.B.dylib 0x00007fff8874ca55 start_wqthread + 13 Thread 7: 0 libSystem.B.dylib 0x00007fff8873d426 read + 10 1 com.apple.CoreFoundation 0x00007fff82eb1ae0 __CFSocketRead + 544 2 com.apple.CFNetwork 0x00007fff88bba667 __CFSocketReadWithError(__CFSocket*, unsigned char*, long, CFStreamError*) + 35 3 com.apple.CFNetwork 0x00007fff88bba397 SocketStream::read(__CFReadStream*, unsigned char*, long, CFStreamError*, unsigned char*) + 699 4 com.apple.CoreFoundation 0x00007fff82e3ffac CFReadStreamRead + 540 5 com.apple.CFNetwork 0x00007fff88bd3dc1 HTTPReadFilter::doPlainRead(unsigned char*, long, CFStreamError*, unsigned char*) + 307 6 com.apple.CFNetwork 0x00007fff88bd3c59 HTTPReadFilter::streamRead(__CFReadStream*, unsigned char*, long, CFStreamError*, unsigned char*) + 469 7 com.apple.CoreFoundation 0x00007fff82e3ffac CFReadStreamRead + 540 8 com.apple.CFNetwork 0x00007fff88bd39e6 HTTPNetStreamInfo::streamRead(__CFReadStream*, unsigned char*, long, CFStreamError*, unsigned char*) + 562 9 com.apple.CoreFoundation 0x00007fff82e3ffac CFReadStreamRead + 540 10 com.apple.CFNetwork 0x00007fff88c23892 HTTPReadStream::streamRead(__CFReadStream*, unsigned char*, long, CFStreamError*, unsigned char*) + 82 11 com.apple.CoreFoundation 0x00007fff82e3ffac CFReadStreamRead + 540 12 com.apple.MediaToolbox 0x00007fff86b59a6f FigCFHTTPReadResponse + 855 13 com.apple.CoreFoundation 0x00007fff82eb1503 _signalEventSync + 115 14 com.apple.CoreFoundation 0x00007fff82eb1474 _cfstream_solo_signalEventSync + 116 15 com.apple.CFNetwork 0x00007fff88c228fd HTTPReadStream::streamEvent(unsigned long) + 163 16 com.apple.CoreFoundation 0x00007fff82eb1503 _signalEventSync + 115 17 com.apple.CoreFoundation 0x00007fff82eb1474 _cfstream_solo_signalEventSync + 116 18 com.apple.CoreFoundation 0x00007fff82e52271 __CFRunLoopDoSources0 + 1361 19 com.apple.CoreFoundation 0x00007fff82e50469 __CFRunLoopRun + 873 20 com.apple.CoreFoundation 0x00007fff82e4fc2f CFRunLoopRunSpecific + 575 21 com.apple.CoreFoundation 0x00007fff82e4f9b6 CFRunLoopRun + 70 22 com.apple.CoreMedia 0x00007fff803d4702 FigThreadGlobalNetworkBufferingRunloop + 119 23 libSystem.B.dylib 0x00007fff8876bf8e _pthread_start + 331 24 libSystem.B.dylib 0x00007fff8876be41 thread_start + 13 Thread 8: 0 libSystem.B.dylib 0x00007fff8876d9ee __semwait_signal + 10 1 libSystem.B.dylib 0x00007fff887717f1 _pthread_cond_wait + 1286 2 com.apple.CoreMedia 0x00007fff803d5947 WaitOnCondition + 14 3 com.apple.CoreMedia 0x00007fff803d5b13 FigSemaphoreWaitRelative + 167 4 com.apple.MediaToolbox 0x00007fff86aee8c7 FigAIORequestThread + 398 5 libSystem.B.dylib 0x00007fff8876bf8e _pthread_start + 331 6 libSystem.B.dylib 0x00007fff8876be41 thread_start + 13 Thread 9: 0 libSystem.B.dylib 0x00007fff8874c9da __workq_kernreturn + 10 1 libSystem.B.dylib 0x00007fff8874cdec _pthread_wqthread + 917 2 libSystem.B.dylib 0x00007fff8874ca55 start_wqthread + 13 Thread 10: 0 libSystem.B.dylib 0x00007fff88732e3a mach_msg_trap + 10 1 libSystem.B.dylib 0x00007fff887334ad mach_msg + 59 2 com.apple.CoreFoundation 0x00007fff82e507a2 __CFRunLoopRun + 1698 3 com.apple.CoreFoundation 0x00007fff82e4fc2f CFRunLoopRunSpecific + 575 4 com.apple.CoreFoundation 0x00007fff82e4f9b6 CFRunLoopRun + 70 5 com.apple.QTKit 0x00007fff830d0c49 QTFigVisualContextImageProviderWorkThread + 342 6 libSystem.B.dylib 0x00007fff8876bf8e _pthread_start + 331 7 libSystem.B.dylib 0x00007fff8876be41 thread_start + 13 Thread 11: 0 libSystem.B.dylib 0x00007fff88732e3a mach_msg_trap 
+ 10 1 libSystem.B.dylib 0x00007fff887334ad mach_msg + 59 2 com.apple.CoreFoundation 0x00007fff82e507a2 __CFRunLoopRun + 1698 3 com.apple.CoreFoundation 0x00007fff82e4fc2f CFRunLoopRunSpecific + 575 4 ....audio.toolbox.AudioToolbox 0x00007fff8416267a GenericRunLoopThread::RunLoop() + 42 5 ....audio.toolbox.AudioToolbox 0x00007fff841629f0 GenericRunLoopThread::Run() + 140 6 ....audio.toolbox.AudioToolbox 0x00007fff8412ded5 CAPThread::Entry(CAPThread*) + 67 7 libSystem.B.dylib 0x00007fff8876bf8e _pthread_start + 331 8 libSystem.B.dylib 0x00007fff8876be41 thread_start + 13 Thread 12: 0 libSystem.B.dylib 0x00007fff8876d9ee __semwait_signal + 10 1 libSystem.B.dylib 0x00007fff887717f1 _pthread_cond_wait + 1286 2 com.apple.CoreMedia 0x00007fff803d5947 WaitOnCondition + 14 3 com.apple.CoreMedia 0x00007fff803d5b13 FigSemaphoreWaitRelative + 167 4 com.apple.MediaToolbox 0x00007fff86afd4dd faq_EnqueueSourceDataThread + 44 5 libSystem.B.dylib 0x00007fff8876bf8e _pthread_start + 331 6 libSystem.B.dylib 0x00007fff8876be41 thread_start + 13 Thread 13: 0 libSystem.B.dylib 0x00007fff8876d9ee __semwait_signal + 10 1 libSystem.B.dylib 0x00007fff887717f1 _pthread_cond_wait + 1286 2 com.apple.CoreMedia 0x00007fff803d5947 WaitOnCondition + 14 3 com.apple.CoreMedia 0x00007fff803d5b13 FigSemaphoreWaitRelative + 167 4 com.apple.MediaToolbox 0x00007fff86b9b03b activitySchedulerOnThread + 69 5 libSystem.B.dylib 0x00007fff8876bf8e _pthread_start + 331 6 libSystem.B.dylib 0x00007fff8876be41 thread_start + 13 Thread 14: 0 libSystem.B.dylib 0x00007fff8876d9ee __semwait_signal + 10 1 libSystem.B.dylib 0x00007fff887717f1 _pthread_cond_wait + 1286 2 com.apple.CoreMedia 0x00007fff803d5947 WaitOnCondition + 14 3 com.apple.CoreMedia 0x00007fff803d5b13 FigSemaphoreWaitRelative + 167 4 com.apple.MediaToolbox 0x00007fff86b26d49 audioMentorThread + 6000 5 libSystem.B.dylib 0x00007fff8876bf8e _pthread_start + 331 6 libSystem.B.dylib 0x00007fff8876be41 thread_start + 13 Thread 15: 0 libSystem.B.dylib 0x00007fff8876d9ee __semwait_signal + 10 1 libSystem.B.dylib 0x00007fff887717f1 _pthread_cond_wait + 1286 2 com.apple.CoreMedia 0x00007fff803d5947 WaitOnCondition + 14 3 com.apple.CoreMedia 0x00007fff803d5b13 FigSemaphoreWaitRelative + 167 4 com.apple.MediaToolbox 0x00007fff86b3003a videoMentorThread + 5700 5 libSystem.B.dylib 0x00007fff8876bf8e _pthread_start + 331 6 libSystem.B.dylib 0x00007fff8876be41 thread_start + 13 Thread 16: 0 libSystem.B.dylib 0x00007fff88732e3a mach_msg_trap + 10 1 libSystem.B.dylib 0x00007fff887334ad mach_msg + 59 2 com.apple.CoreFoundation 0x00007fff82e507a2 __CFRunLoopRun + 1698 3 com.apple.CoreFoundation 0x00007fff82e4fc2f CFRunLoopRunSpecific + 575 4 com.apple.CoreFoundation 0x00007fff82e4f9b6 CFRunLoopRun + 70 5 com.apple.QTKit 0x00007fff830cfad4 QTCALayerRendererPendingQWorkLoop + 534 6 libSystem.B.dylib 0x00007fff8876bf8e _pthread_start + 331 7 libSystem.B.dylib 0x00007fff8876be41 thread_start + 13 Thread 17: 0 libSystem.B.dylib 0x00007fff88732e76 semaphore_wait_trap + 10 1 com.apple.VideoToolbox 0x00007fff80487f25 JVTLib_100988 + 11 2 com.apple.VideoToolbox 0x00007fff804d61d8 JVTLib_101021(void*) + 60 3 com.apple.VideoToolbox 0x00007fff804882f4 JVTLib_100971 + 552 4 libSystem.B.dylib 0x00007fff8876bf8e _pthread_start + 331 5 libSystem.B.dylib 0x00007fff8876be41 thread_start + 13 Thread 18: 0 libSystem.B.dylib 0x00007fff88732e76 semaphore_wait_trap + 10 1 com.apple.VideoToolbox 0x00007fff80487f25 JVTLib_100988 + 11 2 com.apple.VideoToolbox 0x00007fff804d61d8 JVTLib_101021(void*) + 60 3 
com.apple.VideoToolbox 0x00007fff804882f4 JVTLib_100971 + 552 4 libSystem.B.dylib 0x00007fff8876bf8e _pthread_start + 331 5 libSystem.B.dylib 0x00007fff8876be41 thread_start + 13 Thread 19: 0 libSystem.B.dylib 0x00007fff88732e9a semaphore_timedwait_signal_trap + 10 1 libSystem.B.dylib 0x00007fff887716e2 _pthread_cond_wait + 1015 2 com.apple.CoreVideo 0x00007fff83d2988c CVDisplayLink::waitUntil(unsigned long long) + 252 3 com.apple.CoreVideo 0x00007fff83d28d91 CVDisplayLink::runIOThread() + 619 4 com.apple.CoreVideo 0x00007fff83d28aeb startIOThread(void*) + 139 5 libSystem.B.dylib 0x00007fff8876bf8e _pthread_start + 331 6 libSystem.B.dylib 0x00007fff8876be41 thread_start + 13 Thread 0 crashed with X86 Thread State (64-bit): rax: 0x0000000000000000 rbx: 0x0000000000000000 rcx: 0x0000000000000000 rdx: 0x0000000000000018 rdi: 0x0000000000000000 rsi: 0x000000020070f7d8 rbp: 0x00007fff5fbfbcf0 rsp: 0x00007fff5fbfbce0 r8: 0x00000001010e48d0 r9: 0x000000000000f740 r10: 0x00000001010e42f0 r11: 0x00007fff87d9ca50 r12: 0x0000000101238600 r13: 0x0000000000000000 r14: 0x000000020070f7c0 r15: 0x0000000000000000 rip: 0x00007fff82e0a788 rfl: 0x0000000000000246 cr2: 0x00007fff702c13c8

    Read the article
