Search Results

Search found 25706 results on 1029 pages for 'issue tracking'.


  • Issue with SSH on Ubuntu - Local connection OK, remote connection fails - Is it me or my ISP?

    - by Benjamin
    I have an issue with a server running Ubuntu 12.04. I am trying to set up a remote connection so I can access the server at my work from out of town. I have installed the SSH server and all that stuff, and I have reassigned the default port from 22 to 3399. A local connection from any OS can connect on the 192.168... address, but in no way can I get a connection on the actual IP address.

    I believe my configuration is correct, and I will attach it; if I have done something wrong in the config, please tell me and I will change it. I honestly think that the router my ISP provided is horrible, and although the port for SSH is forwarded, it might be stopping any inbound traffic. Is there anything I can try to verify this? /var/log/auth does not show any error when I connect via our static IP.

    I have included all values not commented out below (sshd_config):

        Port 3399
        ListenAddress 0.0.0.0
        Protocol 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        HostKey /etc/ssh/ssh_host_ecdsa_key
        UsePrivilegeSeparation yes
        KeyRegenerationInterval 3600
        ServerKeyBits 768
        SyslogFacility AUTH
        LogLevel INFO
        LoginGraceTime 120
        PermitRootLogin yes
        StrictModes yes
        UseDNS no
        RSAAuthentication yes
        IgnoreRhosts yes
        RhostsRSAAuthentication no
        HostbasedAuthentication no
        PermitEmptyPasswords no
        ChallengeResponseAuthentication no
        PasswordAuthentication yes
        GSSAPIAuthentication no
        X11Forwarding yes
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        AcceptEnv LANG LC_*
        Subsystem sftp /usr/lib/openssh/sftp-server
        UsePAM yes

    Am I doing this wrong? (port forwarding image)
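    For what it's worth, one way to check whether the router or ISP is dropping inbound traffic is to attempt a raw TCP connection to the public address from a machine outside the network. A minimal sketch in Python - the address below is a placeholder for the actual static IP:

        import socket

        HOST = "203.0.113.10"  # placeholder for the office's public/static IP
        PORT = 3399            # the non-default SSH port from sshd_config

        try:
            with socket.create_connection((HOST, PORT), timeout=5) as s:
                # sshd sends its version banner as soon as the TCP session opens
                print("Connected, banner:", s.recv(64).decode(errors="replace"))
        except OSError as e:
            # a timeout or refusal here, when local 192.168... connections
            # succeed, points at the port forward or an ISP-side block
            print("Could not reach sshd:", e)

    If the banner comes back, sshd is reachable and the problem lies elsewhere; if the connect times out from outside while LAN connections work, the router's port forward (or the ISP) is the likely culprit.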


  • Is my motherboard failing, or is there some other issue?

    - by ThatGuy
    So, several months ago I put together my own desktop PC. I set up a dual boot of Windows and Ubuntu. Recently, without changing any settings or installing anything new, the wifi stopped working on Windows (I use a wifi adapter). It said it was connected, network settings showed that it was working, and running troubleshooting had no results. My internet still works on any other device. I found that removing the adapter from the motherboard and plugging it back in was the only thing that fixed the problem. Reinstalling the wifi drivers did not help. I purchased a new wifi adapter, but the problem persists.

    More recently, I had a much more discouraging development. Sometimes, turning on the computer results in a boot loop: BIOS never starts. Instead, the monitor turns on as if it got a signal, then immediately turns off. This loops on its own indefinitely until I hold down power, hard reset it, and try again. Sometimes it works, sometimes it doesn't.

    I haven't tested much on the Ubuntu side. It appears that wifi works at least some of the time, but since I've had issues just getting to BIOS, I'm not confident the issue is on the software side. I've also noticed issues with some of the USB ports no longer working, but that seems to come and go. Finally, as of a few minutes ago, I booted to Windows to discover that everything was running very slowly. "Slow" is a relative word, but I have a Samsung 840 Pro SSD and I'm used to applications launching nigh instantly, and it was a solid 3 minutes before any of my applications would load.

    Anyway, my question is this: is it likely that my motherboard is failing? Either way, what steps can I take to try to pin down the problem and figure out what to do?


  • Issue with a secure login - Why am I being redirected to the insecure login?

    - by mstrmrvls
    I'm having some issues getting a website working at my place of work. The issue was raised when a "double login" occurred on the secure login site. The second login was actually being prompted by the HTTP domain and not HTTPS. In essence the situation is like this:

    1) The user navigates to https://mysite.com/something
    2) The login prompt pops up
    3) The user enters username and password
    4) The user is presented with ANOTHER login prompt (IE will say it's insecure, and the address bar reflects that)
    5) If the user puts their password into the insecure prompt, they log in to the insecure site; if they hit cancel, they are presented with a 401 page
    6) Navigating back to https://somesite.com/something will bypass the login prompt and log them in to the secure site automatically (a cookie, maybe)

    I'm a bit confused as to why the user isn't being logged in properly the first time (redirected to non-SSL), but any subsequent login is okay. I've been trying to use Fiddler to see what is happening after the user puts in their password the first time, and to get Fiddler to log in to the site automatically (with no luck). I believe the website in question is using Basic or Digest authentication. Thanks for any help.
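    One way to see where the downgrade happens is to replay the login with a script and inspect the redirect chain rather than the browser. A small sketch using Python's requests library - URL and credentials are placeholders:

        import requests

        URL = "https://mysite.com/something"  # placeholder from the question
        auth = ("username", "password")       # placeholder credentials (Basic auth);
                                              # use requests.auth.HTTPDigestAuth for Digest

        r = requests.get(URL, auth=auth, allow_redirects=True)
        for hop in r.history + [r]:
            # an https:// -> http:// hop in this list is the misbehaving redirect
            print(hop.status_code, hop.url)

    If a hop shows the server answering the HTTPS request with a 301/302 to the http:// URL, the culprit is the server's redirect/vhost configuration rather than the credentials.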


  • Cloned CentOS 6.4 web server for test purposes - Virtual host, .htaccess, URL redirect issue

    - by Shogoot
    I see similar questions, but not my exact challenge. Here is what I have done so far: I cloned a prod server over to VMware to use it as a test server for new functionality I'm going to write. I'm not a sysadmin by trade, but I'm new to this company and I have to do some things that are outside of my comfort zone (that's a good thing :) ).

    The prod server has 2 sites on it, s1.com and s2.com. In /html/s1/ and /html/s2/ there's an .htaccess file under each, looking like this:

        RewriteEngine ON
        RewriteBase /
        RewriteCond %{QUERY_STRING} id=([0-9]+)
        RewriteRule ^.* %1.htm
        RewriteCond %{QUERY_STRING} page=modules/checkout
        RewriteRule ^.* order.php
        RewriteCond %{QUERY_STRING} page=pages/sidekart
        RewriteRule ^.* pages/sidekart.htm

    The issue is that s1 has a lot of pages that really belong under a third domain, s3; the rules on lines 4 and 5 redirect them to /html/s1/. An example of such a URL is:

        s3.com/?page=modules/product&id=521614

    I'm trying to get those URLs (without modifying the URL) to redirect to s3's own /html/s3/ directory structure. I set this up by adding a new virtual host, s3, in the test server's httpd.conf with tests3.com as the ServerName (and changing the other sites to tests1.com and tests2.com), adding an .htaccess to the s3 root directory, and making an html/s3/ directory structure that I populated with an index.html, etc.

    But when I take the same URL (s3.com/?page=modules/product&id=521614) and change it to tests3.com/?page=modules/product&id=521614, I get s1's index page showing up in my browser. I've poked around about a day now and I can't figure out why this happens.
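    Since the wrong site answering usually means Apache fell back to the first-listed (default) virtual host, it may be worth testing whether name-based matching works at all, independently of DNS. A small sketch, assuming the test VM answers on a placeholder address:

        import requests

        SERVER = "http://192.168.1.50"  # placeholder: the test VM's address

        for host in ("tests1.com", "tests2.com", "tests3.com"):
            # same IP every time, only the Host header varies; each name
            # should come back with its own site's index page
            r = requests.get(SERVER, headers={"Host": host})
            print(host, r.status_code, r.text[:60].replace("\n", " "))

    If tests3.com also returns s1's content here, the s3 vhost is not being matched (ServerName typo, port mismatch, or a missing NameVirtualHost directive); if it returns the right page, the problem is more likely name resolution on the client side.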


  • NGINX + PHP-FPM - Strange issue when trying to display images via php-gd / readfile - Connection won't terminate

    - by anonymous-one
    OK, to get the details out of the way: the PHP script can be anything as simple as:

        <?
        header('Content-Type: image/jpeg');
        readfile('/local/image.jpg');
        ?>

    When I execute this via nginx + php-fpm, the image does show up in the browser, but here is what happens:

    - IE: the page stays blank for a long period of time, and eventually the image is shown.
    - Chrome: the image shows, but the loading spinner spins and spins for a long period of time. Eventually the debugger will show the image in red as an error, but the image displays fine.

    Everything else on the server works great. It's pushing out about 100 Mbit steady serving static content, so this is definitely a php-fpm-related issue. I THINK this may have something to do with the chunked encoding being sent back wrong.

    Also, I threw in a pause before the image was read, got the PID of the FPM process, and it looks as though it's terminating correctly (from strace):

        shutdown(3, 1 /* send */)               = 0
        recvfrom(3, "\1\5\0\1\0\0\0\0", 8, 0, NULL, NULL) = 8
        recvfrom(3, "", 8, 0, NULL, NULL)       = 0
        close(3)                                = 0

    The above was dumped long before IE/Chrome decided to give up loading the image (even though the image was shown). Displaying HTML/text content is fine; big bodies etc. all load nice and fast and terminate right away (as they should). Doing something like:

        THIS IS THE IMAGE
        ---BINARY DUMP OF IMAGE---

    works fine too. Any ideas?
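    If the suspicion is wrong response framing (chunked encoding vs. Content-Length), it can help to inspect the raw response headers instead of trusting the browser. A small sketch with Python's standard library - host and path are placeholders:

        import http.client

        conn = http.client.HTTPConnection("example.com")  # placeholder host
        conn.request("GET", "/image.php")                 # placeholder path
        resp = conn.getresponse()

        # a connection that never "finishes" usually means neither a
        # Content-Length nor a correctly terminated chunked body was sent
        print("Content-Length:", resp.getheader("Content-Length"))
        print("Transfer-Encoding:", resp.getheader("Transfer-Encoding"))
        print("read", len(resp.read()), "body bytes")

    Note that readfile() by itself does not emit a Content-Length header, so if the chunked framing on the way out is broken the client has no way to know the body ended; adding header('Content-Length: ' . filesize($path)) in the script is a common workaround.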


  • Tracking upstream svn changes with git-svn and github?

    - by Joseph Turian
    How do I track upstream SVN changes using git-svn and github? I used git-svn to convert an SVN repo to git on github:

        $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa
        $ cd osqa
        $ git remote add origin git@github.com:turian/osqa.git
        $ git push origin master

    I then made a few changes in my git repo, committed, and pushed to github. Now, I am on a new machine. I want to take upstream SVN changes, merge them with my github repo, and push them to my github repo. This documentation says: "If you ever lose your local copy, just run the import again with the same settings, and you'll get another working directory with all the necessary SVN metainfo." So I did the following, but none of the commands work as desired. How do I track upstream SVN changes using git-svn and github? What am I doing wrong?

        $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa
        $ cd osqa
        $ git remote add origin git@github.com:turian/osqa.git
        $ git push origin master
        To git@github.com:turian/osqa.git
         ! [rejected]        master -> master (non-fast forward)
        error: failed to push some refs to 'git@github.com:turian/osqa.git'

        $ git pull
        remote: Counting objects: 21, done.
        remote: Compressing objects: 100% (17/17), done.
        remote: Total 17 (delta 7), reused 9 (delta 0)
        Unpacking objects: 100% (17/17), done.
        From github.com:turian/osqa
         * [new branch]      master     -> origin/master
        From github.com:turian/osqa
         * [new tag]         master     -> master
        You asked me to pull without telling me which branch you want to merge
        with, and 'branch.master.merge' in your configuration file does not
        tell me either. Please name which branch you want to merge on the
        command line and try again (e.g. 'git pull <repository> <refspec>').
        See git-pull(1) for details on the refspec.
        ...

        $ /usr//lib/git-core/git-svn rebase
        warning: refname 'master' is ambiguous.
        First, rewinding head to replay your work on top of it...
        Applying: Added forum/management/commands/dumpsettings.py
        error: Ref refs/heads/master is at 6acd747f95aef6d9bce37f86798a32c14e04b82e but expected a7109d94d813b20c230a029ecd67801e6067a452
        fatal: Cannot lock the ref 'refs/heads/master'.
        Could not move back to refs/heads/master
        rebase refs/remotes/trunk: command returned error: 1


  • Can I recover lost commits in an SVN repository using a local tracking git-svn branch?

    - by Ian Stevens
    An SVN repo I use git-svn to track was recently corrupted, and a backup was recovered. However, a week's worth of commits were lost in the recovery. Is it possible to recover those lost commits using git svn dcommit from my local git repo? Is it sufficient to run git svn dcommit with the SHA1 of the last recovered commit in SVN? E.g.:

        > svn info http://tracked-svn/trunk | sed -n "s/Revision: //p"
        252
        > git log --grep="git-svn-id:.*@252" --format=oneline | cut -f1 -d" "
        55bb5c9cbb5fe11a90ec2e9e1e0c7c502908cf9a
        > git svn dcommit 55bb5c9cbb5fe11a90ec2e9e1e0c7c502908cf9a

    Or will the git-svn-id need to be stripped from the intended commits? I tried this using --dry-run but couldn't tell whether it would try to submit all commits:

        > git svn dcommit --verbose --dry-run 55bb5c9cbb5fe11a90ec2e9e1e0c7c502908cf9a
        Committing to http://tracked-svn/trunk ...
        dcommitted on a detached HEAD because you gave a revision argument.
        The rewritten commit is: 55bb5c9cbb5fe11a90ec2e9e1e0c7c502908cf9a

    Thanks for your help.


  • Using a bookmarklet to track a package

    - by user307558
    I'm trying to write a bookmarklet that tracks a package in the mail. First it checks to see if the tracking page is open; if not, it opens it in a new tab, then sets the value of the form to the tracking number. Finally, it submits the form. What I'm so far unable to do is set the value of the form in the case where the bookmarklet opens up a new tab. Here's what I have:

        javascript: (function(){
            var trackingNumber = "/*tracking number*/";
            var a = document.forms.trackingForm;
            if ('http://fedex.com/Tracking' == document.location) {
                trackingForm.trackNbrs.value = trackingNumber;
                document.forms.trackingForm.submit();
            } else {
                window.open('http://fedex.com/Tracking');
                this.window.onload = function(){ //This seems to be the problem
                    trackingForm.trackNbrs.value = trackingNumber;
                    onload(document.forms.trackingForm.submit());
                }
            }
        })();

    Any ideas?


  • Best practices, PHP, tracking millions of impressions per day.

    - by John
    What do I have to do to make 20k MySQL inserts per second possible during peak hours (around 1k/sec during slower times)? I've been doing some research and I've seen the "INSERT DELAYED" suggestion, writing to a flat file with fopen(file, 'a') and then running a cron job to dump the "needed" data into MySQL, etc. I've also heard you need multiple servers and "load balancers", which I've never heard of, to make something like this work. I've also been looking at these "cloud server" thing-a-ma-jigs and their automatic scalability, but I'm not sure what's actually scalable.

    The application is just a tracker script, so if I have 100 websites that get 3 million page loads a day, there will be around 300 million inserts a day. The data will be run through a script every 15-30 minutes which will normalize it and insert it into another MySQL table.

    How do the big dogs do it? How do the little dogs do it? I can't afford a huge server anymore, so if there are multiple intuitive ways of going at it, you smart people, please let me know :)
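    For context on the flat-file/cron idea: the usual first lever is to stop issuing one INSERT per hit and batch many rows into a single statement on each flush. A minimal sketch in Python with MySQL Connector - the table and column names are made up for illustration:

        import mysql.connector

        conn = mysql.connector.connect(user="tracker", password="secret",
                                       host="localhost", database="stats")
        cur = conn.cursor()

        # pretend these accumulated in memory or a flat file since the last flush
        buffered_hits = [
            (1, "/index.html", "2014-01-10 12:00:00"),
            (1, "/about.html", "2014-01-10 12:00:00"),
            (2, "/",           "2014-01-10 12:00:01"),
        ]

        # one bulk INSERT per flush instead of thousands of tiny ones;
        # executemany() folds the rows into a single multi-row statement
        cur.executemany(
            "INSERT INTO raw_hits (site_id, url, hit_at) VALUES (%s, %s, %s)",
            buffered_hits)
        conn.commit()
        cur.close()
        conn.close()

    The other common levers at this volume are LOAD DATA INFILE from the flat file and inserting into an index-free buffer table, leaving normalization to the 15-30 minute job already described.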


  • Strange build issue using Flex Mojo. Looking for troubleshooting suggestions.

    - by WeeJavaDude
    I have run into a strange issue and I was hoping for some suggestions on how to attack the problem. Here is the environment:

    1) We develop locally using Flex Builder.
    2) We use QuickBuild with FlexMojo 3.4.2 for test builds and production.
    3) In both cases we don't believe optimization is enabled.

    What we are seeing is some strange behavior relating to the Ctrl-Enter key when testing in IE, only in our test environment and not locally. By copying some files over locally, I have narrowed the issue down to differences in the SWF files; we do see a difference in the size of the SWF files in our test environment vs. our local environment.

    A couple of things would help me in troubleshooting:

    1) Is there a way to know what exactly is in the SWF file? What SWCs are included?
    2) How does one compare compile settings between a Maven mojo configuration and the Flex IDE environment?

    Any thoughts or opinions would be very helpful.
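    On question 1, one low-tech starting point is to decompress the two SWFs and compare their raw bodies: a SWF whose signature is "CWS" is simply zlib-compressed after the first 8 header bytes. A rough sketch - the file names are placeholders:

        import hashlib
        import zlib

        def swf_body(path):
            with open(path, "rb") as f:
                data = f.read()
            if data[:3] == b"CWS":   # zlib-compressed SWF
                return data[:8] + zlib.decompress(data[8:])
            if data[:3] == b"FWS":   # uncompressed SWF
                return data
            raise ValueError("not a SWF file")

        for name in ("local_build.swf", "test_build.swf"):  # placeholder names
            body = swf_body(name)
            print(name, len(body), "bytes", hashlib.md5(body).hexdigest())

    If the decompressed sizes differ substantially, swfdump from the Flex SDK can then list the embedded tags and classes to show what the Maven build pulled in that Flex Builder did not.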


  • Google Analytics - Goals - Advanced Segments - Does it keep cookies for tracking visitors?

    - by Kuko
    Hi there, I have been working with Google Analytics Goals and Funnels for quite some time, but one thing is not clear to me, and I would very much appreciate your help. We are advertising on several sites, rotating several different ads. Our main goal is to collect as many sign-ups (new users) as possible, for as low a price as possible. We advertise in such a way that each ad lands on the same URL but contains a different parameter (e.g. http://www.brautpunkt.de/?ref=fb01 or ..... .de/?ref=adw03).

    My question is: if I am looking at the goals (Goals Overview), filtered through Advanced Segments (Landing Page contains /?ref=fb01), is this subset of goals completed only by users who registered in the same session, after coming to our site directly from the ad? Or does it also include users who came for the first time through this ad (/?ref=fb01), didn't register in the same session, but came back directly, for example the next day, and registered then?

    Thank you very much in advance for your advice. Peter


  • Python 2.7 list of lists manipulation functionality

    - by user3688163
    I am trying to perform several operations on the myList list of lists below and am having some trouble figuring it out. I am very new to Python.

        myList = [
            ['Issue Id','1.Completeness for OTC','Break',3275,33,33725102303,296384802,20140107],
            ['Issue Id','2.Validity check1 for OTC','Break',3308,0,34021487105,0,20140107],
            ['Issue Id','3.Validity check2 for OTC','Break',3308,0,34021487105,0,20140107],
            ['Issue Id','4.Completeness for RST','Break',73376,1,8.24931E+11,44690130,20140107],
            ['Issue Id','5.Validity check1 for RST','Break',73377,0,8.24976E+11,0,20140107],
            ['Liquidity','1. OTC - Null','Break',7821,0,2.28291E+11,0,20140110],
            ['Liquidity','2. OTC - Unmapped','Break',7778,43,2.27712E+11,579021732.8,20140110],
            ['Liquidity','3. RST - Null','Break',335120,0,1.01425E+12,0,20140110],
            ['Liquidity','4. RST - Unmapped','Break',334608,512,1.01351E+12,735465433.1,20140110],
            ['Liquidity','5. RST - Valid','Break',335120,0,1.01425E+12,0,20140110],
            ['Issue Id','1.Completeness for OTC','Break',3292,33,32397924450,306203929,20140110],
            ['Issue Id','2.Validity check1 for OTC','Break',3325,0,32704128379,0,20140110],
            ['Issue Id','3.Validity check2 for OTC','Break',3325,0,32704128379,0,20140110],
            ['Issue Id','4.Completeness for RST','Break',73594,3,8.5352E+11,69614602,20140110],
            ['Issue Id','5.Validity check1 for RST','Break',73597,0,8.5359E+11,0,20140110],
            ['Unlinked Silver ID','DQ','Break',3201318,176,20000000,54974.33386,20140101],
            ['Missing GCI','DQ','Break',3201336,158,68000000,49351.9588,20140101],
            ['Missing Book','DQ','Break',3192720,8774,3001000000,2740595.484,20140101],
            ['Matured Trades','DQ','Break',3201006,488,1371000000,152428.8348,20140101],
            ['Illiquid Trades','1.Completeness Check for range','Break',43122,47,88597695671,54399061.43,20140107],
            ['Illiquid Trades','2.Completeness Check for non','Break',39033,0,79133622401,0,20140107]
        ]

    I am trying to get the result below but do not know how to do so:

        newList = [
            ['Issue Id','1.Completeness for OTC:2.Validity check1 for OTC:3.Validity check2 for OTC','Break',3275,33,33725102303,296384802,20140107],
            ['Issue Id','4.Completeness for RST:5.Validity check1 for RST','Break',73376,1,8.24931E+11,44690130,20140107],
            ['Liquidity','1. OTC - Null:2. OTC - Unmapped','Break',7821,0,2.28291E+11,0,20140110],
            ['Liquidity','3. RST - Null:4. RST - Unmapped:5. RST - Valid','Break',335120,0,1.01425E+12,0,20140110],
            ['Issue Id','1.Completeness for OTC:2.Validity check1 for OTC:3.Validity check2 for OTC','Break',3292,33,32397924450,306203929,20140110],
            ['Issue Id','4.Completeness for RST:5.Validity check1 for RST','Break',73594,3,8.5352E+11,69614602,20140110],
            ['Unlinked Silver ID','DQ','Break',3201318,176,20000000,54974.33386,20140101],
            ['Missing GCI','DQ','Break',3201336,158,68000000,49351.9588,20140101],
            ['Missing Book','DQ','Break',3192720,8774,3001000000,2740595.484,20140101],
            ['Matured Trades','DQ','Break',3201006,488,1371000000,152428.8348,20140101],
            ['Illiquid Trades','1.Completeness Check for range','Break',43122,47,88597695671,54399061.43,20140107],
            ['Illiquid Trades','2.Completeness Check for non','Break',39033,0,79133622401,0,20140107]
        ]

    Rules to create newList:

    - Group the rows of myList by type (myList[i][0]) and date (myList[i][7]).
    - Within a group, compare rows by two sums: myList[i][3] + myList[i][4] and myList[i][5] + myList[i][6].
    - Rows in a group whose sums both match are merged into a single row: the myList[i][1] values are concatenated, separated by ':', and the numeric fields of the first such row are kept.
    - Rows whose sums do not match those of any other row in their group are listed in newList as-is.

    The newList above illustrates the result I am trying to achieve. If anyone has any ideas how to do this they would be greatly appreciated. Thank you!
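    Reading those rules literally, a grouping pass with ordinary dictionaries appears to reproduce the expected newList. A sketch, assuming this reading of the merge rule is right (the tolerance is there because the E+11 figures look rounded):

        from collections import OrderedDict

        def close(a, b, rel_tol=1e-4):
            # the scientific-notation sums only match approximately
            return abs(a - b) <= rel_tol * max(abs(a), abs(b), 1.0)

        def merge_rows(rows):
            groups = OrderedDict()                 # keyed by (type, date)
            for row in rows:
                groups.setdefault((row[0], row[7]), []).append(row)

            result = []
            for grp in groups.values():
                merged = []                        # [row, sum34, sum56] accumulators
                for row in grp:
                    s34, s56 = row[3] + row[4], row[5] + row[6]
                    for acc in merged:
                        if acc[1] == s34 and close(acc[2], s56):
                            acc[0][1] += ':' + row[1]   # same sums: join the labels
                            break
                    else:
                        merged.append([list(row), s34, s56])
                result.extend(acc[0] for acc in merged)
            return result

        newList = merge_rows(myList)

    OrderedDict keeps the original row order on Python 2.7 (plain dicts only guarantee that from Python 3.7 on). Rows whose sums match no other row in their (type, date) group fall through the inner loop untouched and are carried over as-is.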


  • Best practice -- Content Tracking Remote Data (cURL, file_get_contents, cron, et al.)?

    - by user322787
    I am attempting to build a script that will log data that changes every 1 second. My initial thought was "just run a PHP file that does a cURL every second from cron" -- but I have a very strong feeling that this isn't the right way to go about it. Here are my specifications:

    - There are currently 10 sites I need to gather data from and log to a database -- this number will invariably increase over time, so the solution needs to be scalable.
    - Each site spits out data to a URL every second, but only keeps 10 lines on the page, and it can sometimes spit out up to 10 lines each time, so I need to pick up that data every second to ensure I get all of it.
    - As I will also be writing this data to my own DB, there's going to be I/O every second of every day for a considerably long time.

    Barring magic, what is the most efficient way to achieve this? It might help to know that the data I am getting every second is very small, under 500 bytes.
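    Not a verdict on best practice, but as a concrete baseline: the usual shape is one long-running worker per feed rather than a cron-spawned process every second. A rough Python sketch with the requests library - the feed URLs and log file stand in for the real sites and DB:

        import threading
        import time
        import requests

        FEEDS = ["http://site1.example/feed", "http://site2.example/feed"]  # placeholders

        def poll(url):
            prev = []
            while True:
                start = time.time()
                try:
                    lines = requests.get(url, timeout=0.8).text.splitlines()
                    fresh = [l for l in lines if l not in prev]  # page keeps ~10 lines
                    prev = lines
                    if fresh:
                        with open("feed.log", "a") as f:  # stand-in for the DB write
                            f.writelines(l + "\n" for l in fresh)
                except requests.RequestException:
                    pass  # feed briefly unreachable; try again next tick
                time.sleep(max(0.0, 1.0 - (time.time() - start)))

        for u in FEEDS:
            threading.Thread(target=poll, args=(u,), daemon=True).start()
        while True:
            time.sleep(60)  # keep the main thread alive

    Appending to a flat file and bulk-loading on a timer keeps the per-second I/O cheap; at 10-100 feeds of under 500 bytes each, this stays well within one small machine's capacity.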


  • xubuntu 12.04 screen regularly stops refreshing, refreshing resumes after un-/re-maximizing a window

    - by user68477
    My screen frequently stops refreshing completely. I can make it resume by un-maximizing/re-maximizing a window or by switching workspaces (the un-/re-maximizing works every time; switching workspaces sometimes has to be done a couple of times). The immediate impression is that the system is frozen: there is apparently no reaction to anything I do, but interestingly the window title bar will change if I switch applications (i.e. alt+tab) or browse through folders.

    I saw an identical issue in Ubuntu 10.04, though a lot less frequently. I never saw this in Ubuntu 12.04 (which I had been using for the last 4-5 months). After switching to Xubuntu I'm seeing it again, and more frequently.

    The specific reason I'm not sure this is a bug: I installed gnome-control-center, which dragged in tons of packages. This was while trying to fix a dual-screen setup, and I believe the issue surfaced after this. I later meticulously removed every package from this batch (purge) in the hope that every setting would also be removed, but the issue has persisted.

    Another issue happened at the same time; it may be totally unrelated, but it feels as if it is the same basic issue: the screen resolution of the greeter became less than the expected 1680x1050, and often after login there's just a blank and totally unresponsive wallpaper without a panel, so I have to force a reboot. When the login is successful, it's very clear that the system works hard to determine the correct resolution, which is achieved after a few blinks to black screens.

    My questions:

    1) Is this a settings issue or a bug?
    2) How do I begin to research the issue - could I perhaps somehow reset Xubuntu/Xfce to defaults?
    3) If this is a bug, where would be the most appropriate place to report it?

    System: ThinkPad T500, ATI Radeon HD 3650

        $ fglrxinfo
        display: :0.0  screen: 0
        OpenGL vendor string: Advanced Micro Devices, Inc.
        OpenGL renderer string: ATI Mobility Radeon HD 3650
        OpenGL version string: 3.3.11627 Compatibility Profile Context

        $ uname -a
        Linux srvname 3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    Xfce 4.10, Compiz 0.9.7.8


  • Screen recorder for tracking back your steps while debugging?

    - by hstoerr
    I am wondering whether there is a screen recorder usable for checking what exactly you did a couple of minutes ago while debugging, or just checking what value of a variable was displayed the last time you hit that breakpoint, etc. (Of course this latter capability would be something terrific for an IDE to implement, but I've never seen anything like that so far. :-)

    For this you would need a screen recorder that could record all day and preferably automatically delete any recording that is, say, older than an hour. The recording format should probably be something suited to screen capture rather than natural scenes. Do you know a screen recorder like that, or at least one that comes close?


  • DB Design - Linking to a parent without circular reference issues

    - by zSysop
    Hi all, I'm having trouble coming up with a solution for the following issue. Let's say I have a DB that looks something like the following:

        Issue table
        Id | Details | CreateDate | ClosedDate

        Issue Notes table
        Id | ObjectId | Notes | NoteDate

        Issue Assignment table
        Id | ObjectId | AssignedToId | AssignedDate

    I'd like to allow the linking of an issue to another issue. I thought about adding a column to the Issue table called ParentIssueId, which would allow me to link issues, but I foresee circular references occurring within the Issue table if I go through with this implementation. Is there a better way to go about doing this, and if so, how? Thanks
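    For what it's worth, the ParentIssueId column is workable as long as the application refuses any link that would close a loop: before re-parenting an issue, walk the ancestor chain of the proposed parent. A sketch in Python, where the parent_of mapping stands in for a SELECT of ParentIssueId by Id:

        def would_create_cycle(parent_of, issue_id, new_parent_id):
            """True if making new_parent_id the parent of issue_id loops."""
            seen = set()
            cur = new_parent_id
            while cur is not None:
                if cur == issue_id:
                    return True        # issue_id is an ancestor: refuse the link
                if cur in seen:
                    return True        # the existing data is already cyclic
                seen.add(cur)
                cur = parent_of.get(cur)
            return False

        # linking 3 under 1 is fine; linking 1 under 3 would close a loop
        parent_of = {2: 1, 3: 2}
        print(would_create_cycle(parent_of, 3, 1))  # False
        print(would_create_cycle(parent_of, 1, 3))  # True

    A separate IssueLink(ParentId, ChildId) table guarded by the same check is the usual alternative when one issue may need links to several others.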


  • Looking for an issue tracker / project management software that automatically manages start/completion dates based on priority/relationships

    - by user361910
    So, a little background: we are a small company with a half-dozen developers. We have been evaluating many project management / issue tracking software packages (Trac, Redmine, FogBugz, etc.) and trying to create a decent process/workflow for managing projects, adding features, fixing bugs, etc. I'd like to think our requirements are similar to those of most other companies our size. Essentially, what this comes down to is:

    1) An easy way for the PM and developers to track projects, issues, bugs, etc.
    2) An easy way for the PM and admin/executives to get a birds-eye view of progress and easily manage timelines, schedules, and priorities.

    After trying Trac, we moved to Redmine. We found Redmine easier than Trac to administer, and the ability to have sub-projects and sub-tickets is great. However, the big problem we ran into is that it is very difficult to manage schedules and timelines. Managing them seems incredibly time-intensive, because you have to manually enter a start date, estimated time, and end date for each ticket, project, etc. So if you set up a month's schedule based on priorities, what are you supposed to do when a particular ticket/issue/subproject takes more time than was estimated? Right now, it appears I would have to go back in and MANUALLY change the start/end date of every single item.

    What would be ideal is to be able to set priorities/dependencies and estimated time on tickets/milestones, and have the software automatically manage the start/end dates. Does anyone know how to get Redmine to do this, or can you recommend a different software package that can do something like this?


  • List with non-null elements ends up containing null. A synchronization issue?

    - by Alix
    Hi. First of all, sorry about the title -- I couldn't figure out one that was short and clear enough. Here's the issue: I have a list List<MyClass> list to which I always add newly-created instances of MyClass, like this: list.Add(new MyClass()). I don't add elements any other way. However, I then iterate over the list with foreach and find that there are some null entries. That is, the following code will sometimes throw an exception:

        foreach (MyClass entry in list)
            if (entry == null) throw new Exception("null entry!");

    I should point out that the list.Add(new MyClass()) calls are performed from different threads running concurrently. The only thing I can think of to account for the null entries is the concurrent access; List<> isn't thread-safe, after all. Though I still find it strange that it ends up containing null entries, instead of just not offering any guarantees on ordering. Can you think of any other reason?

    Also, I don't care in which order the items are added, and I don't want the calling threads to block waiting to add their items. If synchronization is truly the issue, can you recommend a simple way to call the Add method asynchronously, i.e., create a delegate that takes care of that while my thread keeps running its code? I know I can create a delegate for Add and call BeginInvoke on it. Does that seem appropriate? Thanks.
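    The persistent nulls are consistent with an unsynchronized grow-and-write inside Add: the backing array is copied to a larger one without locking, so a concurrent Add can write into the old, about-to-be-discarded array (the write is lost, leaving a null slot) while the count still advances. A toy model of that mechanism in Python - not .NET's actual implementation, just the same unsafe shape, and the race may take a few runs to show up:

        import threading

        class NaiveList:
            """Toy dynamic array whose add(), like an unsynchronized
            List<T>.Add, bumps the count and fills the slot in separate,
            unlocked steps and grows by copying the backing array."""
            def __init__(self):
                self._items = [None] * 4
                self._count = 0

            def add(self, value):
                i = self._count                   # racing threads may read the same i
                while i >= len(self._items):      # grow: copy into a bigger array
                    self._items = self._items + [None] * len(self._items)
                self._count = i + 1               # slot i now counts as present...
                try:
                    self._items[i] = value        # ...but may land in an array that a
                except IndexError:                # concurrent grow already replaced,
                    pass                          # so the write is silently lost

            def snapshot(self):
                return self._items[:self._count]

        lst = NaiveList()

        def worker():
            for _ in range(100000):
                lst.add(object())

        threads = [threading.Thread(target=worker) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        snap = lst.snapshot()
        print(len(snap), "entries,", sum(1 for x in snap if x is None), "None entries")

    So null entries are exactly what a data race would produce, not just a loss of ordering; a lock around Add (or a producer queue drained by a single writer thread) removes them.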


  • How to make CMake generate KDE-style makefiles with progress tracking?

    - by mhambra
    This is the kind of output I mean:

        Generating moc_krvfshandler.cpp
        [  2%] [  2%] Building CXX object krusader/GUI/CMakeFiles/GUI.dir/GUI_automoc.o
        Building CXX object krusader/DiskUsage/CMakeFiles/DiskUsage.dir/DiskUsage_automoc.o
        [  2%] Building CXX object krusader/Dialogs/CMakeFiles/Dialogs.dir/Dialogs_automoc.o
        Generating moc_krmousehandler.cpp
        [  3%] Building CXX object krusader/ActionMan/CMakeFiles/ActionMan.dir/actionman.o
        Generating moc_packjob.cpp
        [  4%] Building CXX object krusader/Dialogs/CMakeFiles/Dialogs.dir/krsqueezedtextlabel.o
        Generating moc_krpreviews.cpp
        [  4%] Built target VFS_automoc
        [  4%] Generating moc_krvfsmodel.cpp

    I wanted the same thing.


  • Release Notes for 7/6/2012

    Happy belated 4th of July, everyone! Here are the notes for this week's release on CodePlex:

    - Implemented performance improvements to Git repositories.
    - Fixed an issue that caused the final "click here" download link to fail in projects that display ads.
    - Fixed an issue for certain projects that made it impossible to edit releases.
    - Fixed an issue where the URL for a diff of a file would not take users to the diff in question.
    - Fixed a rare issue that prevented a small subset of projects from modifying their project details.
    - Fixed an issue where scrollbars were missing in our side-by-side diff viewer.
    - Super- and sub-scripts now work properly in documentation.
    - Addressed several usability issues around the diff viewer.
    - Fixed an issue where the scrollbar could disappear in the advanced issue tracker if a user opens a modal dialog.

    Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.


  • Issue setting up a VPN connection (IKEv1) between Android (ICS VPN client) and a strongSwan 4.5.0 server

    - by Kushagra Bhatnagar
    I am facing issues setting up a VPN connection (IKEv1) between an Android device (ICS VPN client) and a strongSwan 4.5.0 server. Below is the setup:

    The strongSwan server is running on an Ubuntu Linux machine which is connected to a wifi hotspot. Using the steps in this guide (link), I generated the CA, server, and client certificates. Once generated, the certificates (clientCert.p12 and caCert.pem) were sent to the mobile via mail and installed on the Android device. The IP addresses assigned to the interfaces are: Linux server wlan0 interface (where the server is running): 192.168.43.212; Android device eth0 interface: 192.168.43.62. The Android device is attached to the same wifi hotspot. On the Android device, I use the "IPsec Xauth RSA" option for the VPN authentication configuration. I am using the following ipsec.conf configuration:

        # basic configuration
        config setup
            plutodebug=all
            # crlcheckinterval=600
            # strictcrlpolicy=yes
            # cachecrls=yes
            nat_traversal=yes
            # charonstart=yes
            plutostart=yes

        # Add connections here.
        # Sample VPN connections
        conn ios1
            keyexchange=ikev1
            authby=xauthrsasig
            xauth=server
            left=%defaultroute
            leftsubnet=0.0.0.0/0
            leftfirewall=yes
            leftcert=serverCert.pem
            right=192.168.43.62
            rightsubnet=10.0.0.0/24
            rightsourceip=10.0.0.2
            rightcert=clientCert.pem
            pfs=no
            auto=add

    With the above configuration, when I enable VPN on the Android device the connection is not successful; it times out in the authentication phase. I ran Wireshark on both the Android device and the strongSwan server, and from the tcpdump these are the observations:

    - Initially, Identity Protection (Main Mode) exchanges happen between device and server, and all are successful.
    - After the successful Identity Protection (Main Mode) exchanges, the server sends a Transaction (Config Mode) message to the device.
    - In reply, the Android device sends an Informational message instead of a Transaction (Config Mode) message.
    - The server keeps sending Transaction (Config Mode) messages, and the device again sends Identity Protection (Main Mode) messages.
    - Finally, a timeout occurs and the connection fails.

    I also captured the strongSwan server logs; below are snippets which show the same thing:

        Apr 27 21:09:40 Linux pluto[12105]: | **parse ISAKMP Message:
        Apr 27 21:09:40 Linux pluto[12105]: |   initiator cookie:
        Apr 27 21:09:40 Linux pluto[12105]: |   06 fd 61 b8 86 82 df ed
        Apr 27 21:09:40 Linux pluto[12105]: |   responder cookie:
        Apr 27 21:09:40 Linux pluto[12105]: |   73 7a af 76 74 f0 39 8b
        Apr 27 21:09:40 Linux pluto[12105]: |   next payload type: ISAKMP_NEXT_HASH
        Apr 27 21:09:40 Linux pluto[12105]: |   ISAKMP version: ISAKMP Version 1.0
        Apr 27 21:09:40 Linux pluto[12105]: |   exchange type: ISAKMP_XCHG_INFO
        Apr 27 21:09:40 Linux pluto[12105]: |   flags: ISAKMP_FLAG_ENCRYPTION
        Apr 27 21:09:40 Linux pluto[12105]: |   message ID:  a2 80 ad 82
        Apr 27 21:09:40 Linux pluto[12105]: |   length: 92
        Apr 27 21:09:40 Linux pluto[12105]: | ICOOKIE:  06 fd 61 b8 86 82 df ed
        Apr 27 21:09:40 Linux pluto[12105]: | RCOOKIE:  73 7a af 76 74 f0 39 8b
        Apr 27 21:09:40 Linux pluto[12105]: | peer:  c0 a8 2b 3e
        Apr 27 21:09:40 Linux pluto[12105]: | state hash entry 25
        Apr 27 21:09:40 Linux pluto[12105]: | state object not found
        Apr 27 21:09:40 Linux pluto[12105]: packet from 192.168.43.62:500: Informational Exchange is for an unknown (expired?) SA
        Apr 27 21:09:40 Linux pluto[12105]: | next event EVENT_RETRANSMIT in 10 seconds for #9

    Can anyone please provide an update on this issue? Why does the VPN connection time out, and why are the ISAKMP exchanges between Android and the strongSwan server not proceeding properly?


  • USB Drive Warm Boot Issue. Cold boot = Perfect. Warm boot = "No boot sector found on USB device"

    - by Dan
    I'm experiencing a boot/reboot quirk similar to, yet somewhat different from, others posted here. I have two Western Digital "Passport" external USB 2.0 drives (160 GB and 320 GB) with Ubuntu 11.10 installed. The BIOS on my laptop is set to boot USB drives before the internal drive, and it works well. Either drive will cold-boot Ubuntu 11.10 perfectly.

    However, at times the computer must be restarted (warm boot) to have software updates or other changes take effect. When I warm-boot the computer, it goes through its normal shutdown/restart, then the usual BIOS tests, at which time the USB drive is selected momentarily (as normal), and that's when the trouble starts. I get a "No boot sector found on USB device", and the computer proceeds to boot Windows from the internal drive. If I press F12 early during the restart sequence (which allows manual selection of the boot sequence), USB is still shown before the internal drive in the list (as it should be). If I highlight "USB" and press ENTER, I still get a "No boot sector..." error message, but the external USB drive then proceeds to boot Ubuntu normally.

    I have a third external USB drive (a Western Digital 160 GB Media Center) that cold-boots and warm-boots perfectly, so I know everything in the computer is set up properly. It also tells me Ubuntu is set up correctly on the drives, as all three were done identically, and at the same time. I've tried formatting the Passport drives NTFS and FAT32, but it didn't change anything. If I power down the computer, then start it up again, it will boot Ubuntu from either of the Passport drives perfectly - every time. It's as if during a warm boot the Passport drives are somehow looking for the boot loader in the wrong location on the drive - yet either drive works when booting from a cold start.

    Not a show-stopper... but frustrating. The computer is a Dell XPS M1530, A12 BIOS (latest), but I don't think the computer plays into the issue because one of the three drives works at all times. It's only the two WD Passport models at issue here. Thanks!

