Search Results

Search found 4514 results on 181 pages for 'totally newbie'.

  • Searching for duplicate records within a text file where the duplicate is determined by only two fields

    - by plg
    First, Python newbie; be patient/kind. Once a month I receive a large text file (think 7 million records) to test for duplicate values. This is catalog information. I get 7 fields, but the two I'm interested in are a supplier code and a full orderable part number. To determine if a record is duplicated, I strip all special characters from the part number (except . and #) to create a compressed part number; the test for duplicates is then the supplier code / compressed part number combination. This part is fairly straightforward. Currently, I just copy the original file with 2 new columns (compressed part and duplicate indicator). If the part is a duplicate, I put a "YES" in the last field. Now that this is done, I want to be able to go back (or better yet, at the same time) and get the previous record where there was a supplier code/compressed part number match. So far, my code looks like this:

        # Compress Full Part to a Compressed Part and check for duplicates
        # on the Supplier Code / Compressed Part combination
        import sys
        import re
        import time

        start = time.time()

        try:
            file1 = open(r"C:\Accounting\May Accounting\May.txt", "r")
        except IOError:
            print >>sys.stderr, "Cannot Open Read File"
            sys.exit(1)

        try:
            file2 = open(file1.name[0:len(file1.name)-4] + "_" + "COMPRESSPN.txt", "a")
        except IOError:
            print >>sys.stderr, "Cannot Open Write File"
            sys.exit(1)

        hdrList = "CIGSUPPLIER|FULL_PART|PART_STATUS|ALIAS_FLAG|ACQUISITION_FLAG|COMPRESSED_PART|DUPLICATE_INDICATOR"
        file2.write(hdrList + chr(10))

        lines_seen = set()
        affirm = "YES"
        records = file1.readlines()

        for record in records:
            fields = record.split(chr(124))
            if fields[0] == "CIGSupplier":
                continue  # if the incoming file has a header line, skip it
            file2.write(fields[0] + "|")  # Supplier Code
            file2.write(fields[1] + "|")  # Full_Part
            file2.write(fields[2] + "|")  # Part Status
            file2.write(fields[3] + "|")  # Alias Flag
            file2.write(re.sub("[$\r\n]", "", fields[4]) + "|")  # Acquisition Flag
            file2.write(re.sub("[^0-9a-zA-Z.#]", "", fields[1]) + "|")  # Compressed_Part
            dupechk = fields[0] + "|" + re.sub("[^0-9a-zA-Z.#]", "", fields[1])
            if dupechk not in lines_seen:
                file2.write(chr(10))
                lines_seen.add(dupechk)
            else:
                file2.write(affirm + chr(10))

        print "it took", time.time() - start, "seconds."

        file2.close()
        file1.close()

    It runs in less than 6 minutes, so I am happy with this part, even if it is not elegant. Right now, when I get my results, I import them into Access and do a self join to locate the duplicates. Loading/querying/exporting a file this size in Access takes around an hour, so I would like to export the matched duplicates straight to another text file or an Excel file. Confusing enough? Thanks.
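
    A minimal sketch of the "get both records" step described above, done in the same single pass: remember the first record seen for each supplier/compressed-part key, and when a later record collides, write the original and the duplicate together to a separate file. The file names and the two field positions are assumptions taken from the description:

        import re

        def find_duplicate_pairs(in_path, out_path):
            first_seen = {}  # key -> first full record seen with that supplier/part key
            with open(in_path) as src, open(out_path, "w") as dupes:
                for record in src:
                    fields = record.rstrip("\r\n").split("|")
                    key = (fields[0], re.sub(r"[^0-9a-zA-Z.#]", "", fields[1]))
                    if key in first_seen:
                        # write the original record followed by its duplicate
                        dupes.write(first_seen[key] + "\n")
                        dupes.write(record.rstrip("\r\n") + "\n")
                    else:
                        first_seen[key] = record.rstrip("\r\n")

        # hypothetical file names, for illustration only
        find_duplicate_pairs("May.txt", "May_duplicate_pairs.txt")

    A dict holding 7 million short keys is feasible on a modern machine, and this avoids the Access round trip entirely.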

  • JavaScript (via Greasemonkey) failing to set "title" attributes on <a> tags

    - by rjray
    I have the following (fairly) simple JavaScript snippet that I have wired into Greasemonkey. It goes through a page, looks for <a> tags whose href points to tinyurl.com, and adds a "title" attribute that identifies the true destination of the link. Much of the important code comes from an older (unsupported) Greasemonkey script that quit working when the inner component that held the XPath implementation changed. My script:

        (function() {
            var providers = new Array();

            providers['tinyurl.com'] = function(link, fragment) {
                // This is mostly taken from the (broken due to XPath component
                // issues) tinyurl_popup_preview script.
                link.title = "Loading...";
                GM_xmlhttpRequest({
                    method: 'GET',
                    url: 'http://preview.tinyurl.com/' + fragment,
                    onload: function(res) {
                        var re = res.responseText.match("<blockquote><b>(.+)</b></blockquote>");
                        if (re) {
                            link.title = re[1].replace(/\<br \/\>/g, "").replace(/&amp;/g, "&");
                        } else {
                            link.title = "Parsing failed...";
                        }
                    },
                    onerror: function() {
                        link.title = "Connection failed...";
                    }
                });
            };

            var uriPattern = /(tinyurl\.com)\/([a-zA-Z0-9]+)/;
            var aTags = document.getElementsByTagName("a");

            for (i = 0; i < aTags.length; i++) {
                var data = aTags[i].href.match(uriPattern);
                if (data != null && data.length > 1 && data[2] != "preview") {
                    var source = data[1];
                    var fragment = data[2];
                    var link = aTags[i];
                    aTags[i].addEventListener("mouseover", function() {
                        if (link.title == "") {
                            (providers[source])(link, fragment);
                        }
                    }, false);
                }
            }
        })();

    (The reason the "providers" associative array is set up the way it is, is so that I can expand this to cover other URL-shortening services as well.) I have verified that all the various branches of the code are being reached correctly, in cases where the link being examined does and does not match the pattern. What isn't happening is any change to the "title" attribute of the anchor tags. I've watched this via Firebug, thrown alert() calls in left and right, and it just never changes. In a previous iteration, all expressions of the form link.title = "..."; had originally been link.setAttribute("title", "...");. That didn't work, either. I'm no newbie to JavaScript OR Greasemonkey, but this one has me stumped!
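
    One thing worth flagging in the loop above, separate from the title question: source, fragment, and link are declared with var, which is function-scoped, so every mouseover handler created by the loop closes over the same three variables and sees whatever values the final iteration left in them. A hedged sketch of capturing per-iteration values with an immediately invoked function:

        for (var i = 0; i < aTags.length; i++) {
            var data = aTags[i].href.match(uriPattern);
            if (data !== null && data.length > 1 && data[2] !== "preview") {
                // capture this iteration's values in their own scope
                (function(link, source, fragment) {
                    link.addEventListener("mouseover", function() {
                        if (link.title === "") {
                            providers[source](link, fragment);
                        }
                    }, false);
                })(aTags[i], data[1], data[2]);
            }
        }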

  • Javascript - undefined cookie value?

    - by Computeras
    Try running the code; I know the problem is in part 1. Thanks in advance. P.S. I'm a newbie in JS.

        <html>
        <head>
        <script>
        {
            // 1. part
            var Cookies = "";

            function createCookie(name, value, days) {
                if (days) {
                    var date = new Date();
                    date.setTime(date.getTime() + (days*24*60*60*1000));
                    var expires = "; expires=" + date.toGMTString();
                }
                else var expires = "";
                document.cookie = name + "=" + value + expires + "; path=/";
            }

            function readCookie(name) {
                var nameEQ = name + "=";
                var ca = document.cookie.split(';');
                for (var i = 0; i < ca.length; i++) {
                    var c = ca[i];
                    while (c.charAt(0) == ' ') c = c.substring(1, c.length);
                    if (c.indexOf(nameEQ) == 0) return c.substring(nameEQ.length, c.length);
                }
                return null;
            }

            function eraseCookie(name) {
                createCookie(name, "", -1);
            }

            // 2. part
            function saveIt(name) {
                var x = document.forms['cookieform'].cookievalue.value;
                if (!x) alert('Please fill in a value in the input box.');
                else {
                    Cookies.create(name, x, 7);
                    alert('Cookie created');
                }
            }

            function readIt(name) {
                alert('The value of the cookie is ' + Cookies[name]);
            }

            function eraseIt(name) {
                Cookies.erase(name);
                alert('Cookie erased');
            }

            function init() {
                for (var i = 1; i < 3; i++) {
                    var x = Cookies['ppkcookie' + i];
                    if (x) alert('Cookie ppkcookie' + i +
                        '\nthat you set on a previous visit, is still active.\nIts value is ' + x);
                }
            }
        }
        </script>
        <body>
        <form name="forma">
            <input type="text" name="cookievalue">
            <input type="button" value="Spremi" onClick="saveIt('ppkcookie1')">
            <input type="button" value="Ispisi" onClick="readIt('ppkcookie1')">
        </form>
        </body>
        </html>
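
    For what it's worth, the script above sets Cookies to an empty string but then calls Cookies.create(...) and Cookies.erase(...), and reads Cookies[name], none of which a plain string provides; that would explain the undefined values. A hedged sketch of one repair: wrap the three standalone functions in an object and read values through a method rather than by property lookup (function names are the ones defined above):

        var Cookies = {
            create: function (name, value, days) { createCookie(name, value, days); },
            read:   function (name) { return readCookie(name); },
            erase:  function (name) { eraseCookie(name); }
        };

        // the call sites that did property lookups then become:
        function readIt(name) {
            alert('The value of the cookie is ' + Cookies.read(name));
        }

        function init() {
            for (var i = 1; i < 3; i++) {
                var x = Cookies.read('ppkcookie' + i);
                if (x) alert('Cookie ppkcookie' + i + ' is still active. Its value is ' + x);
            }
        }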

  • Add a fade-in/fade-out in jQuery on multiple conditional statements

    - by Matthew Harwood
    Task: on click of the li navigation filter, show and hide content with a fade-in/fade-out transition. Problem: I'm just guessing and checking on where to place this fadeIn/fadeOut transition. Furthermore, I feel like my code is inefficient because I'm using 4 conditional statements. Would Stack lead me toward a solution that improves the overall logic of this script, so I can just make a pretty transition? LIVE CODE

    The jQuery script:

        $(document).ready(function () {
            // attach a single click listener on li elements
            $('li.navCenter').on('click', function () {
                // get the id of the clicked li
                var id = $(this).attr('id');
                // match the current id, then apply the filter
                if (id == 'printInteract') {
                    // reset all the boxes for multiple clicks
                    $(".box").find('.video, .print, .web').closest('.box').show();
                    $(".box").find('.web, .video').closest('.box').hide();
                    $(".box").find('.print').show();
                }
                if (id == 'webInteract') {
                    $(".box").find('.video, .print, .web').closest('.box').show();
                    $(".box").find('.print, .video').closest('.box').hide();
                    $(".box").find('.web').show();
                }
                if (id == 'videoInteract') {
                    $(".box").find('.video, .print, .web').closest('.box').show();
                    $(".box").find('.print, .web').closest('.box').hide();
                    $(".box").find('.video').show();
                }
                if (id == 'allInteract') {
                    $(".box").find('.video, .print, .web').closest('.box').show();
                }
            });
        });

    The HTML (selected):

        <nav>
          <ul class="navSpaces">
            <li id="allInteract" class="navCenter">
              <a id="activeAll" class="navBg" href="#"><div class="relativeCenter"><img src="asset/img/logo30px.png" /><h3>all</h3></div></a>
            </li>
            <li id="printInteract" class="navCenter">
              <a id="activePrint" class="navBg" href="#"><div class="relativeCenter"><img src="asset/img/print.gif" /><h3>print</h3></div></a>
            </li>
            <li id="videoInteract" class="navCenter">
              <a id="activeVideo" class="navBg" href="#"><div class="relativeCenter"><img src="asset/img/video.gif" /><h3>video</h3></div></a>
            </li>
            <li id="webInteract" class="navCenter">
              <a id="activeWeb" class="navBg" href="#"><div class="relativeCenter"><img src="asset/img/web.gif" /><h3>web</h3></div></a>
            </li>
          </ul>
        </nav>

    P.S. Sorry for the newbie question.
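
    A hedged sketch of collapsing the four branches into one data-driven handler: map each nav id to the content class it reveals, fade everything out, and fade the matching boxes back in once the fade-out completes. Ids and classes are taken from the markup above; the 200ms durations are arbitrary:

        $(document).ready(function () {
            // which content class each nav item reveals; 'allInteract' shows everything
            var filters = {
                printInteract: '.print',
                webInteract:   '.web',
                videoInteract: '.video',
                allInteract:   '.print, .web, .video'
            };

            $('li.navCenter').on('click', function () {
                var selector = filters[this.id];
                // fade all boxes out, then fade the matching ones back in
                $('.box').fadeOut(200).promise().done(function () {
                    $('.box').has(selector).fadeIn(200);
                });
            });
        });

    Using .promise().done() delays the fade-in until every fade-out animation has finished, which is what gives the transition its clean two-phase look.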

  • C# video equivalent to Image.FromStream? Or changing the scope of the following script to allow video

    - by Daniel
    The following is part of an upload class in a C# script. I'm a PHP programmer; I've never messed with C# much, but I'm trying to learn. This upload script will not handle anything except images, and I need to adapt this class to handle other types of media also, or rewrite it altogether. If I'm correct, using (Image image = Image.FromStream(file.InputStream)) basically says that the scope of the following is Image, and only an image can be used or the object is discarded? And also that the variable image is being created from an Image from the file stream, which I understand to be like the $_FILES array in PHP? I dunno. I don't really care about making thumbnails right now either way, so if this can be taken out and still process the upload, I'm totally cool with that. I just haven't had any luck getting this thing to take anything but images, even when commenting out that whole part of the class...

        protected void Page_Load(object sender, EventArgs e)
        {
            string dir = Path.Combine(Request.PhysicalApplicationPath, "files");
            if (Request.Files.Count == 0)
            {
                // No files were posted
                Response.StatusCode = 500;
            }
            else
            {
                try
                {
                    // Only one file at a time is posted
                    HttpPostedFile file = Request.Files[0];
                    // Size limit 100MB
                    if (file.ContentLength > 102400000)
                    {
                        // File too large
                        Response.StatusCode = 500;
                    }
                    else
                    {
                        string id = Request.QueryString["userId"];
                        string[] folders = userDir(id);
                        foreach (string folder in folders)
                        {
                            dir = Path.Combine(dir, folder);
                            if (!Directory.Exists(dir)) Directory.CreateDirectory(dir);
                        }
                        string path = Path.Combine(dir, String.Concat(Request.QueryString["batchId"], "_", file.FileName));
                        file.SaveAs(path);

                        // Create thumbnail
                        int dot = path.LastIndexOf('.');
                        string thumbpath = String.Concat(path.Substring(0, dot), "_thumb", path.Substring(dot));
                        using (Image image = Image.FromStream(file.InputStream))
                        {
                            // Find the ratio that will create maximum height or width of 100px.
                            double ratio = Math.Max(image.Width / 100.0, image.Height / 100.0);
                            using (Image thumb = new Bitmap(image, new Size((int)Math.Round(image.Width / ratio), (int)Math.Round(image.Height / ratio))))
                            {
                                using (Graphics graphic = Graphics.FromImage(thumb))
                                {
                                    // Make sure thumbnail is not crappy
                                    graphic.SmoothingMode = SmoothingMode.HighQuality;
                                    graphic.InterpolationMode = InterpolationMode.High;
                                    graphic.CompositingQuality = CompositingQuality.HighQuality;
                                    // JPEG
                                    ImageCodecInfo codec = ImageCodecInfo.GetImageEncoders()[1];
                                    // 90% quality
                                    EncoderParameters encode = new EncoderParameters(1);
                                    encode.Param[0] = new EncoderParameter(Encoder.Quality, 90L);
                                    // Resize
                                    graphic.DrawImage(image, new Rectangle(0, 0, thumb.Width, thumb.Height));
                                    // Save
                                    thumb.Save(thumbpath, codec, encode);
                                }
                            }
                        }
                        // Success
                        Response.StatusCode = 200;
                    }
                }
                catch
                {
                    // Something went wrong
                    Response.StatusCode = 500;
                }
            }
        }
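
    A hedged sketch of the restructuring described: save the upload unconditionally, and attempt the thumbnail only when the posted MIME type says it is an image, so video or audio files never reach the image-decoding code. The StartsWith("image/") check is an assumption (it trusts the client-supplied content type), and the helper name is hypothetical:

        using System;
        using System.Drawing;
        using System.IO;
        using System.Web;

        public static class UploadHelper
        {
            // Saves any posted file; thumbnails only files whose MIME type is "image/*".
            public static void SaveUpload(HttpPostedFile file, string dir, string batchId)
            {
                string path = Path.Combine(dir, String.Concat(batchId, "_", file.FileName));
                file.SaveAs(path);  // videos, audio, documents are stored as-is

                if (file.ContentType.StartsWith("image/", StringComparison.OrdinalIgnoreCase))
                {
                    file.InputStream.Position = 0;  // rewind before decoding
                    using (Image image = Image.FromStream(file.InputStream))
                    {
                        // ... existing thumbnail generation goes here ...
                    }
                }
            }
        }

    The using block, incidentally, is just deterministic disposal: it guarantees image.Dispose() runs when the block exits, and has nothing to do with restricting what kinds of files the page will accept.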

  • How can I enable a debugging mode via a command-line switch for my Perl program?

    - by Michael Mao
    I am learning Perl in a "head-first" manner; I am absolutely a newbie in this language. I am trying to have a debug_mode switch on the command line which can be used to control how my script works, by switching certain subroutines "on and off". Below is what I've got so far:

        #!/usr/bin/perl -s -w
        # purpose : make subroutine execution optional,
        #           depending on a CLI switch flag

        use strict;
        use warnings;

        use constant DEBUG_VERBOSE              => "v";
        use constant DEBUG_SUPPRESS_ERROR_MSGS  => "s";
        use constant DEBUG_IGNORE_VALIDATION    => "i";
        use constant DEBUG_SETPPING_COMPUTATION => "c";

        our ($debug_mode);

        mainMethod();

        sub mainMethod # ()
        {
            if (!$debug_mode) {
                print "debug_mode is OFF\n";
            }
            elsif ($debug_mode) {
                print "debug_mode is ON\n";
            }
            else {
                print "OMG!\n";
                exit -1;
            }
            checkArgv();
            printErrorMsg("Error_Code_123", "Parsing Error at...");
            verbose();
        }

        sub checkArgv # ()
        {
            print("Number of ARGV : " . (1 + $#ARGV) . "\n");
        }

        sub printErrorMsg # ($error_code, $error_msg, ..)
        {
            if (defined($debug_mode) && !($debug_mode =~ DEBUG_SUPPRESS_ERROR_MSGS)) {
                print "You can only see me if -debug_mode is NOT set" .
                      " to DEBUG_SUPPRESS_ERROR_MSGS\n";
                die("terminated prematurely...\n") and exit -1;
            }
        }

        sub verbose # ()
        {
            if (defined($debug_mode) && ($debug_mode =~ DEBUG_VERBOSE)) {
                print "Blah blah blah...\n";
            }
        }

    So far as I can tell, it at least works: the -debug_mode switch doesn't interfere with normal ARGV, and the following command lines work:

        ./optional.pl
        ./optional.pl -debug_mode
        ./optional.pl -debug_mode=v
        ./optional.pl -debug_mode=s

    However, I am puzzled when multiple debug modes are "mixed", such as:

        ./optional.pl -debug_mode=sv
        ./optional.pl -debug_mode=vs

    I don't understand why the above lines "magically work". I see both DEBUG_VERBOSE and DEBUG_SUPPRESS_ERROR_MSGS apply to the script, which is fine in this case. However, if there are some "conflicting" debug modes, I am not sure how to set the precedence of modes. Also, I am not certain whether my approach is good enough for Perlists, and I hope I am getting my feet in the right direction. One big problem is that I now put if statements inside most of my subroutines to control their behavior under different modes. Is this okay? Is there a more elegant way? I know there must be a debug module on CPAN or elsewhere, but I want a truly minimal solution that doesn't depend on any module beyond the defaults, and I cannot have any control over the environment where this script will be executed...
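
    Incidentally, the "magic" mixing works because =~ treats each constant as a regular expression: -debug_mode=sv matches both /v/ and /s/, so both modes apply. For comparison, a hedged sketch of the same idea built on Getopt::Long (which ships with core Perl, so it keeps the no-extra-modules constraint), using one repeatable --debug flag; precedence between conflicting modes then becomes explicit in the order the hash is tested:

        #!/usr/bin/perl
        # note: the -s shebang switch is dropped; Getopt::Long does the parsing
        use strict;
        use warnings;
        use Getopt::Long;

        # e.g.  ./optional.pl --debug v --debug s
        my %debug;
        GetOptions('debug=s' => sub { $debug{$_[1]} = 1 })
            or die "usage: $0 [--debug v|s|i|c] ...\n";

        sub verbose {
            return unless $debug{v};
            print "Blah blah blah...\n";
        }

        sub print_error_msg {
            my ($code, $msg) = @_;
            return if $debug{s};   # suppression takes precedence over printing
            warn "$code: $msg\n";
        }

        verbose();
        print_error_msg("Error_Code_123", "Parsing Error at...");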

  • Illegal Start of Expression

    - by Kraivyne
    Hello there. I have just started to learn the very basics of Java programming, using a book entitled "Programming Video Games for the Evil Genius". I have an "illegal start of expression" error that I can't for the life of me get rid of. I have checked the sample code from the book and mine is identical. The error is coming from the for(int i = difficulty; i >= 0; i- - ) line. Thanks for helping a newbie out.

        import javax.swing.*;

        public class S1P4
        {
            public static void main(String[] args) throws Exception
            {
                int difficulty;
                difficulty = Integer.parseInt(JOptionPane.showInputDialog(
                    "How good are you?\n" + "1 = Great\n" + "10 = Terrible"));
                boolean cont = false;
                do
                {
                    cont = false;
                    double num1 = (int)(Math.round(Math.random()*10));
                    double num2;
                    do
                    {
                        num2 = (int)(Math.round(Math.random()*10));
                    } while (num2 == 0.0);
                    int sign = (int)(Math.round(Math.random()*3));
                    double answer;
                    System.out.println("\n\n*****");
                    if (sign == 0)
                    {
                        System.out.println(num1 + " times " + num2);
                        answer = num1*num2;
                    }
                    else if (sign == 1)
                    {
                        System.out.println(num1 + " divided by " + num2);
                        answer = num1/num2;
                    }
                    else if (sign == 1)
                    {
                        System.out.println(num1 + " plus " + num2);
                        answer = num1 + num2;
                    }
                    else if (sign == 1)
                    {
                        System.out.println(num1 + " minus " + num2);
                        answer = num1 - num2;
                    }
                    else
                    {
                        System.out.println(num1 + " % " + num2);
                        answer = num1 % num2;
                    }
                    System.out.println("*****\n");
                    for (int i = difficulty; i >= 0; i- - )
                    {
                        System.out.println(i + "...");
                        Thread.sleep(500);
                    }
                    System.out.println("ANSWER: " + answer);
                    String again;
                    again = JOptionPane.showInputDialog("Play again?");
                    if (again.equals("yes"))
                        cont = true;
                } while (cont);
            }
        }
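
    The usual cause of "illegal start of expression" in a line like this is the whitespace inside the decrement operator: i- - parses as i minus a dangling unary minus, while the operator must be written as i-- with no space. A self-contained sketch of just the countdown loop, assuming that is the culprit (the difficulty value is hard-coded in place of the dialog input):

        public class Countdown {
            public static void main(String[] args) throws InterruptedException {
                int difficulty = 5; // stands in for the value read from JOptionPane
                for (int i = difficulty; i >= 0; i--) { // i-- : no space in the operator
                    System.out.println(i + "...");
                    Thread.sleep(500); // half-second pause between numbers
                }
            }
        }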

  • Problem initialising 2D array

    - by TeeJay
    OK, so I have a 2D array that is initialised with values from a file (format: x y z). My file reads in the values correctly, but when adding the z value to the matrix/2D array, I run into a segfault and I have no idea why. It is possibly incorrect use of pointers? I still don't quite have the hang of them yet. This is my initialiser; it works fine, and even initialises all "z" values to 0:

        int** make2DArray(int rows, int columns)
        {
            int** newArray;
            newArray = (int**)malloc(rows*sizeof(int*));
            if (newArray == NULL) {
                printf("out of memory for newArray.\n");
            }
            for (int i = 0; i < rows; i++) {
                newArray[i] = (int*)malloc(columns*sizeof(int));
                if (newArray[i] == NULL) {
                    printf("out of memory for newArray[%d].\n", i);
                }
            }
            // initialise all values to 0
            for (int i = 0; i < rows; i++) {
                for (int j = 0; j < columns; j++) {
                    newArray[i][j] = 0;
                }
            }
            return newArray;
        }

    This is how I call the initialiser (and the problem function):

        int** map = make2DArray(rows, columns);
        fillMatrix(&map, mapFile);

    And this is the problem code:

        void fillMatrix(int*** inMatrix, FILE* inFile)
        {
            int x, y, z;
            char line[100];
            while (fgets(line, sizeof(line), inFile) != NULL) {
                sscanf(line, "%d %d %d", &x, &y, &z);
                *inMatrix[x][y] = z;
            }
        }

    From what I can gather through the use of ddd, the problem comes when y gets to 47. The map file has a max "x" value of 47 and a max "y" value of 63. I'm pretty sure I haven't got the order mixed up, so I don't know why the program is segfaulting. I'm sure it's some newbie mistake...
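
    The likely culprit is operator precedence in fillMatrix: [] binds tighter than unary *, so *inMatrix[x][y] means *(inMatrix[x][y]), which indexes the triple pointer itself, rather than the intended (*inMatrix)[x][y]. And since fillMatrix never reallocates the array, the extra level of indirection isn't needed at all. A hedged sketch passing the int** directly; it also checks that sscanf matched, and it assumes rows and columns cover the max coordinates (at least 48 and 64 for the map described):

        #include <stdio.h>

        /* No reallocation happens inside, so passing int** is enough. */
        void fillMatrix(int **matrix, FILE *inFile)
        {
            int x, y, z;
            char line[100];

            while (fgets(line, sizeof(line), inFile) != NULL) {
                if (sscanf(line, "%d %d %d", &x, &y, &z) == 3) {
                    matrix[x][y] = z;   /* was *inMatrix[x][y], i.e. *(inMatrix[x][y]) */
                }
            }
        }

        /* call site becomes:  fillMatrix(map, mapFile);  */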

  • Several C# Language Questions

    - by Water Cooler v2
    1) What is int? Is it any different from the struct System.Int32? I understand that the former is a C# alias (typedef or #define equivalent) for the CLR type System.Int32. Is this understanding correct?

    2) When we say:

        IComparable x = 10;

    Is that like saying:

        IComparable x = new System.Int32();

    But we can't new a struct, right? Or, in C-like syntax:

        struct System.Int32 *x;
        x->someThing = 10;

    3) What is String, with a capitalized S? I see in Reflector that it is the sealed String class, which, of course, is a reference type, unlike the System.Int32 above, which is a value type. What is string, with an uncapitalized s, though? Is that also the C# alias for this class? Why can I not see the alias definitions in Reflector?

    4) Try to follow me down this subtle train of thought, if you please. We know that a storage location of a particular type can only access properties and members on its interface. That means:

        Person p = new Customer();
        p.Name = "Water Cooler v2"; // legal, as Name is defined on Person.

        // but illegal without an explicit cast: even though the backing
        // store is a Customer, the storage location is of type Person,
        // which doesn't support the member/method being accessed/called.
        p.GetTotalValueOfOrdersMade();

    Now, with that inference, consider this scenario:

        int i = 10;
        // obviously System.Object defines no member to store an integer
        // value (or any other value) in. So, my question really is: when
        // the integer is boxed, what is the *type* it is actually boxed to?
        // In other words, what is the type that forms the backing store
        // on the heap for this operation?
        object x = i;

    Update: Thank you for your answers, Eric Gunnerson and Aaronought. I'm afraid I haven't been able to articulate my questions well enough to attract very satisfying answers. The trouble is, I do know the answers to my questions on the surface, and I am by no means a newbie programmer. But I have to admit, a deeper understanding of the intricacies of how a language and its underlying platform/runtime handle storage of types has eluded me for as long as I've been a programmer, even though I write correct code.
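
    A tiny probe for question 4: box an int and ask the CLR what type the heap object reports. The box is a boxed System.Int32 itself, not some separate wrapper type, which is also why the unbox cast and the interface check both work:

        using System;

        class BoxingProbe
        {
            static void Main()
            {
                int i = 10;
                object x = i;                         // boxing copies the value to the heap
                Console.WriteLine(x.GetType());       // prints "System.Int32"
                Console.WriteLine(x is IComparable);  // True: the box exposes Int32's interfaces
                int back = (int)x;                    // unboxing copies the value back out
                Console.WriteLine(back);              // 10
            }
        }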

  • How dangerous is e.preventDefault();, and can it be replaced by keydown/mousedown tracking?

    - by yc
    I'm working on a tracking script for a fairly sophisticated CRM, for tracking form actions in Google Analytics. I'm trying to balance the desire to track form actions accurately with the need to never prevent a form from working. Now, I know that doing something like this doesn't work:

        $('form').submit(function() {
            _gaq.push(['_trackEvent', 'Form', 'Submit', $(this).attr('action')]);
        });

    The DOM unloads before this has a chance to process. So, a lot of sample code recommends something like this:

        $('form').submit(function(e) {
            e.preventDefault();
            var form = this;
            _gaq.push(['_trackEvent', 'Form', 'Submit', $(this).attr('action')]);
            // ...do some other tracking stuff...
            setTimeout(function() {
                form.submit();
            }, 400);
        });

    This is reliable in most cases, but it makes me nervous. What if something happens between e.preventDefault() and when I get around to triggering the DOM-based submit? I've totally broken the form. I've been poking around some other analytics implementations, and I've noticed something like this:

        $('form').mousedown(function() {
            _gaq.push(['_trackEvent', 'Form', 'Submit', $(this).attr('action')]);
        });
        $('form').keydown(function(e) {
            if (e.which === 13) // if the keydown is the enter key
                _gaq.push(['_trackEvent', 'Form', 'Submit', $(this).attr('action')]);
        });

    Basically, instead of interrupting the form submit, it preempts it by assuming that if someone is mousing down, or keying down on Enter, then that form is being submitted. Obviously, this will result in a certain number of false positives, but it completely eliminates the use of e.preventDefault(), which in my mind eliminates the risk that I might ever prevent a form from successfully submitting. So, my questions: Is it possible to take the standard form-tracking snippet and prevent it from ever fully blocking the form submit? Is the mousedown/keydown alternative viable, and are there any submission cases it may miss? Specifically, are there other ways to end up submitting besides the mouse and the keyboard's Enter key? And will the browser always have time to process JavaScript before beginning to unload the page?
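
    A hedged sketch of a middle ground between the two approaches: intercept the submit, but re-trigger the native submit from both the tracker's completion hook and a short fallback timer, so a tracking hiccup can delay the form by at most a second but never block it. The _set hitCallback push is the ga.js hook for "run this once the hit has been sent"; if a given ga.js version lacks it, only the timer path fires:

        $('form').submit(function (e) {
            e.preventDefault();
            var form = this;

            var go = function () {
                if (form._tracked) { return; }  // guard: fire the real submit once
                form._tracked = true;
                form.submit();  // the native DOM submit does not re-run this handler
            };

            _gaq.push(['_set', 'hitCallback', go]);  // preferred path: after the event is sent
            _gaq.push(['_trackEvent', 'Form', 'Submit', $(form).attr('action')]);
            setTimeout(go, 1000);  // safety net: never wait longer than one second
        });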

  • jQuery .load(), don't show new content until images loaded

    - by Jarred
    Hi. I have been working on a jQuery photo slideshow. It scales the images to the browser size and slides them left and right. There is no pre-determined size or aspect ratio; the script does everything on the fly. It requires that all images be fully loaded, so it can custom-resize each individual image based on its own aspect ratio (width():height(), etc.), calculate the width of the containing div, and calculate the slide distance from one image to another. As a stand-alone, it works pretty well (despite my lack of skills)! I simply hide the slideshow's containing div at (document).ready, allow the images to load, then run the slideshow prep scripts at (window).load. Only once this is done does it make the slideshow divs, images, etc. appear, properly sized, positioned and ready to roll.

    The ultimate goal is to be able to load in any number of slideshows without refreshing the page. The point of this is to be able to play uninterrupted background music. I know music on websites is annoying, but the target market likes it, a lot! I am using:

        $(target).load('page.php .element', function prepInsertNewShow() {
            // Prepare slideshow
            resizeImages();
            slideArray();
            // Show slideshow
            $(target).fadeIn();
        });

    and it definitely works! The problem is that I cannot find a way to hold off on preparing and showing the new content until the images have finished loading. It runs the slideshow prep scripts (which are totally dependent on the images being fully loaded) before the images are loaded, which results in a completely jacked-up show. What I want to do is this:

        $(target).load('page.php .element', function prepInsertNewShow() {
            // Wait until images are loaded
            $('img').load(function() {
                // Prepare slideshow
                resizeImages();
                slideArray();
                // Show slideshow
                $(target).fadeIn();
            });
        });

    But this doesn't seem to work; the new content is never shown. You can see a live version here. The initial gallery loads correctly and everything looks good. The only nav link that works is Galleries > Engagement, which will load a new show (a containing div with multiple <img> tags). You will see that the images are not centered, and the containing div and slide distances are much too small, as they were calculated using images that were not actually loaded. Is there any way I can delay handling and showing new content until it is fully loaded? Any suggestions would be most appreciated; thanks for your time!

    PS - It just occurred to me while typing this that a decent solution may be to insert width="x" height="x" into the <img> tags, so the script can work from those values even if the images have not loaded... hmm...
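
    A hedged sketch of the "wait for every image" step, reusing the function names from the question: after .load() injects the new markup, count a load (or error) event for each image inside the target, treat already-cached images (img.complete) as done, and run the prep/show code only when the counter drains:

        $(target).load('page.php .element', function () {
            var $imgs = $(target).find('img');
            var remaining = $imgs.length;

            if (remaining === 0) {          // nothing to wait for
                $(target).fadeIn();
                return;
            }

            function imageDone() {
                remaining -= 1;
                if (remaining === 0) {      // every image settled: true sizes available
                    resizeImages();
                    slideArray();
                    $(target).fadeIn();
                }
            }

            $imgs.each(function () {
                if (this.complete) {
                    imageDone();                          // already in the browser cache
                } else {
                    $(this).one('load error', imageDone); // count failures too, so it never stalls
                }
            });
        });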

  • Terminating a long-executing thread and then starting a new one in response to user changing parameters via UI in an applet

    - by user1817170
    I have an applet which creates music using the JFugue API and plays it for the user. It allows the user to input a musical phrase which the piece will be based on, or lets them choose to have a phrase generated randomly. I had been using the following method (successfully) to simply stop and start the music, which runs in a thread using the Player class from JFugue. I generate the music using my classes and user input from the applet GUI, then:

        private playerThread pthread;
        private Thread threadPlyr;
        private Player player;
        // (from the variables declaration)

        public void startMusic(Pattern p) // p is a JFugue object holding the generated music
        {
            if (pthread == null) {
                pthread = new playerThread();
            } else {
                pthread = null;
                pthread = new playerThread();
            }
            if (threadPlyr == null) {
                threadPlyr = new Thread(pthread);
            } else {
                threadPlyr = null;
                threadPlyr = new Thread(pthread);
            }
            pthread.setPattern(p);
            threadPlyr.start();
        }

        class playerThread implements Runnable // plays MIDI using the JFugue Player
        {
            private Pattern pt;

            public void setPattern(Pattern p) {
                pt = p;
            }

            @Override
            public void run() {
                try {
                    player.play(pt); // takes a couple of minutes or more to execute
                    resetGUI();
                } catch (Exception exception) {
                }
            }
        }

    And the following to stop the music when the user presses the stop/start button while Player.isPlaying() is true:

        public void stopMusic()
        {
            threadPlyr.interrupt();
            threadPlyr = null;
            pthread = null;
            player.stop();
        }

    Now I want to implement a feature which will allow the user to change parameters while the music is playing, create an updated music pattern, and then play THAT pattern. Basically, the idea is to simulate "real time" adjustments to the generated music. Well, I have been beating my head against the wall on this for a couple of weeks. I've read all the standard Java documentation, researched, read, and searched forums, and I have tried many different ideas, none of which have succeeded. The problem I've run into with every approach is that when I start the new thread with the new, updated musical pattern, all the old threads ALSO start, and there is a cacophony of unintelligible noise instead of my desired output.

    From what I've gathered, the methods I've come across require the thread to periodically check the value of a flag variable and shut itself down from within its run block in response. However, since my thread makes a call that takes several minutes to execute (playing the music), and I need to terminate it WHILE that call is executing, there is really no safe way to do so with a flag. So, I'm wondering if there is something I'm missing when it comes to threads, or if perhaps I can accomplish my goal using a totally different approach. Any ideas or guidance are greatly appreciated! Thank you!
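
    A hedged sketch of the "restart with new parameters" flow, reusing the startMusic()/stopMusic() pair above. It assumes player.stop() makes the blocking player.play(...) call return, so the old thread falls out of run() on its own; a fresh Player per restart is an extra precaution against state lingering from the previous pattern:

        // Called from the UI when the user changes a parameter mid-playback.
        public synchronized void restartMusic(Pattern updated) {
            if (player != null && player.isPlaying()) {
                player.stop();       // unblocks the old play() call; the old thread exits run()
            }
            player = new Player();   // fresh Player: nothing left over from the last pattern
            startMusic(updated);     // spins up a brand-new playerThread/Thread pair
        }

    The key design point is that the Player, not the Thread, is the thing to interrupt: Thread.interrupt() cannot break out of a long native MIDI playback call, but stopping the player ends the call it is blocked in.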

  • CUDA threads for inner loop

    - by Manolete
    I've got this kernel:

        __global__ void kernel1(int keep, int include, int width,
                                int* d_Xco, int* d_Xnum, bool* d_Xvalid, float* d_Xblas)
        {
            int i, k;
            i = threadIdx.x + blockIdx.x * blockDim.x;
            if (i < keep) {
                for (k = 0; k < include; k++) {
                    int val = (d_Xblas[i*include + k] >= 1e5);
                    int aux = d_Xnum[i];
                    d_Xblas[i*include + k] *= (!val);
                    d_Xco[i*width + aux] = k;
                    d_Xnum[i] += val;
                    d_Xvalid[i*include + k] = (!val);
                }
            }
        }

    launched with:

        int keep = 9000;
        int include = 23000;
        int width = 0.2*include;
        int threads = 192;
        int blocks = keep+threads-1/threads;

        kernel1<<<blocks, threads>>>(keep, include, width, d_Xco, d_Xnum, d_Xvalid, d_Xblas);

    This kernel1 works fine, but it is obviously not totally optimized. I thought it would be straightforward to eliminate the inner loop over k, but for some reason it doesn't work. My first idea was:

        __global__ void kernel2(int keep, int include, int width,
                                int* d_Xco, int* d_Xnum, bool* d_Xvalid, float* d_Xblas)
        {
            int i, k;
            i = threadIdx.x + blockIdx.x * blockDim.x;
            k = threadIdx.y + blockIdx.y * blockDim.y;
            if ((i < keep) && (k < include)) {
                int val = (d_Xblas[i*include + k] >= 1e5);
                int aux = d_Xnum[i];
                d_Xblas[i*include + k] *= (float)(!val);
                d_Xco[i*width + aux] = k;
                atomicAdd(&d_Xnum[i], val);
                d_Xvalid[i*include + k] = (!val);
            }
        }

    launched with a 2D grid:

        int keep = 9000;
        int include = 23000;
        int width = 0.2*include;
        int th = 32;
        dim3 threads(th, th);
        dim3 blocks(keep+threads.x-1/threads.x, include+threads.y-1/threads.y);

        kernel2<<<blocks, threads>>>(keep, include, width, d_Xco, d_Xnum, d_Xvalid, d_Xblas);

    Although I believe the idea is fine, it does not work, and I am running out of ideas here. Could you please help me out? I also think the problem could be in d_Xco, which stores the position k in a smaller array, so the order matters; I can't think of any other way of doing it...
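
    Two things stand out in kernel2. First, aux is read before the atomicAdd, so two threads working on the same row i can read the same aux and write their k values to the same d_Xco slot; the value atomicAdd returns (the old count) is the way to reserve a unique slot. Second, the grid arithmetic in both launches needs parentheses, since keep+threads-1/threads evaluates as keep+threads-0. A hedged sketch, assuming the order of the k values stored per row does not matter (with atomics it becomes nondeterministic):

        __global__ void kernel2_fixed(int keep, int include, int width,
                                      int* d_Xco, int* d_Xnum, bool* d_Xvalid, float* d_Xblas)
        {
            int i = threadIdx.x + blockIdx.x * blockDim.x;
            int k = threadIdx.y + blockIdx.y * blockDim.y;

            if (i < keep && k < include) {
                int val = (d_Xblas[i*include + k] >= 1e5f);
                d_Xblas[i*include + k] *= (float)(!val);
                d_Xvalid[i*include + k] = !val;
                if (val) {
                    // atomicAdd returns the OLD count: a unique slot for this hit in row i
                    int slot = atomicAdd(&d_Xnum[i], 1);
                    d_Xco[i*width + slot] = k;
                }
            }
        }

        // launch: note the parentheses around (N + block - 1)
        dim3 threads(32, 32);
        dim3 blocks((keep    + threads.x - 1) / threads.x,
                    (include + threads.y - 1) / threads.y);
        kernel2_fixed<<<blocks, threads>>>(keep, include, width, d_Xco, d_Xnum, d_Xvalid, d_Xblas);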

  • Ajax posting to PHP

    - by JQonfused
    Hi guys, I'm testing a jQuery Ajax POST on a local Apache 2.2 server with PHP 5.3 (totally new at this). Here are the files, all in the same folder.

    The HTML body (jQuery library included in the head):

        <form id="postForm" method="post">
            <label for="name">Input Name</label>
            <input type="text" name="name" id="name" /><br />
            <label for="age">Input Age</label>
            <input type="text" name="age" id="age" /><br />
            <input type="submit" value="Submit" id="submitBtn" />
        </form>
        <div id="resultDisplay"></div>
        <script src="queryRequest.js"></script>

    queryRequest.js:

        $(document).ready(function() {
            $('#s').focus();
            $('#postForm').submit(function() {
                var name = $('#name').val();
                var age = $('#age').val();
                var URL = "post.php";
                $.ajax({
                    type: 'POST',
                    url: URL,
                    datatype: 'json',
                    data: { 'name': name, 'age': age },
                    success: function(data) {
                        $('#resultDisplay').append("Value returned.<br />name: " + data.name + " age: " + data.age);
                    },
                    error: function() {
                        $('resultDisplay').append("ERROR!");
                    }
                });
            });
        });

    post.php:

        <?php
        $name = $_POST['name'];
        $age = $_POST['age'];
        $return = array('name' => $name, 'age' => $age);
        echo json_encode($return);
        ?>

    After filling in the two fields and pressing Submit, the success method is called and text is appended, but the values returned from the Ajax POST are undefined. Then, after less than a second, the text fields are emptied and the text appended to the div is gone. It doesn't seem to be a page refresh, though, since there's no empty-page flash. What's going on here? I'm sure it's a silly mistake, but Firebug isn't telling me anything.
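
    The vanishing fields and appended text are what a normal, non-Ajax submission looks like: nothing in the handler above cancels the form's default action, so the page reloads right after the Ajax call starts. Two smaller issues: the option is spelled dataType (with datatype the JSON response may be treated as plain text, so data.name comes back undefined), and the error handler's selector is missing its #. A hedged sketch of the corrected handler:

        $('#postForm').submit(function(e) {
            e.preventDefault();  // cancel the normal submit so the page doesn't reload

            $.ajax({
                type: 'POST',
                url: 'post.php',
                dataType: 'json',  // capital T, so the response is parsed as JSON
                data: { name: $('#name').val(), age: $('#age').val() },
                success: function(data) {
                    $('#resultDisplay').append(
                        'Value returned.<br />name: ' + data.name + ' age: ' + data.age);
                },
                error: function() {
                    $('#resultDisplay').append('ERROR!');  // '#' was missing above
                }
            });
        });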

  • Declaring an Ajax web-service call's OnSuccess callback anonymously

    - by user333113
    I write a lot of Ajax JavaScript code and have a little design problem which I'm not totally satisfied with. A lot of the time I end up writing something like this:

        var typeOfPopup;

        function RetrievePopupContent(_typeOfPopup) {
            switch (_typeOfPopup) {
                case Popup1:
                    WebService.RetrievePopup1Content(param1, param2, DisplayPopup, OnError);
                    break;
                case Popup2:
                    WebService.RetrievePopup2Content(param1, param2, DisplayPopup, OnError);
                    break;
            }
            typeOfPopup = _typeOfPopup;
        }

        function DisplayPopup(result) {
            switch (typeOfPopup) {
                case Popup1:
                    $get('Popup1').innerHTML = result;
                    break;
                case Popup2:
                    $get('Popup2').innerHTML = result;
                    break;
            }
        }

    All right, this is a simplified example of what I mean; I believe I often end up with much worse code. What I don't like is the global state variable outside the functions. One solution I wasn't thinking about when writing this code is to send a context object. I believe you could write something like this:

        function RetrievePopupContent(typeOfPopup) {
            switch (typeOfPopup) {
                case Popup1:
                    WebService.RetrievePopup1Content(param1, param2, DisplayPopup, OnError, typeOfPopup);
                    break;
                case Popup2:
                    WebService.RetrievePopup2Content(param1, param2, DisplayPopup, OnError, typeOfPopup);
                    break;
            }
        }

        function DisplayPopup(result, typeOfPopup) {
            switch (typeOfPopup) {
                case Popup1:
                    $get('Popup1').innerHTML = result;
                    break;
                case Popup2:
                    $get('Popup2').innerHTML = result;
                    break;
            }
        }

    Is this the recommended way? What I also want to do is to be able to write something like this:

        function RetrievePopupContent(typeOfPopup) {
            switch (typeOfPopup) {
                case Popup1:
                    WebService.RetrievePopup1Content(param1, param2, new function(result) {
                        $get('Popup1').innerHTML = result;
                    }, OnError);
                    break;
                case Popup2:
                    WebService.RetrievePopup2Content(param1, param2, new function(result) {
                        $get('Popup2').innerHTML = result;
                    }, OnError);
                    break;
            }
        }

    Is this possible at all? Can the callback function be declared anonymously? I am grateful for all opinions on the two options I mentioned, and also for new alternatives that get rid of my global variables.
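
    For what it's worth, anonymous callbacks are standard JavaScript; the only change the last version needs is dropping the new keyword, since new function(result) {...} constructs an object instead of passing the function itself. A hedged sketch that also removes the duplicated switch bodies with a small callback factory (names follow the code above):

        // returns a callback that writes the service result into the given element
        function displayInto(popupId) {
            return function(result) {
                $get(popupId).innerHTML = result;
            };
        }

        function RetrievePopupContent(typeOfPopup) {
            switch (typeOfPopup) {
                case Popup1:
                    WebService.RetrievePopup1Content(param1, param2, displayInto('Popup1'), OnError);
                    break;
                case Popup2:
                    WebService.RetrievePopup2Content(param1, param2, displayInto('Popup2'), OnError);
                    break;
            }
        }

    The closure carries the popup id to the callback, so no global state and no second switch are needed.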

  • Can't run PHPUnit tests from bash on Ubuntu 11.10

    - by Mohamad Elbialy
    I'm working with Ubuntu 11.10 as root on my local machine. I've installed XAMPP 1.7.7 and I'm a newbie to Ubuntu. While following a SitePoint tutorial (http://www.sitepoint.com/getting-started-with-pear/) on how to install PEAR in order to use PHPUnit, I didn't notice it at the time, but it seems I installed (or used) an existing PHP 5.3.6 on the command line to do that, and the PEAR installation was built against that version. With XAMPP installed, I now have two versions of PHP: XAMPP's 5.3.8 and the 5.3.6. What I want is to use the XAMPP PHP version and build PEAR on that, so all my work goes through XAMPP. My questions are:

    1. How do I uninstall PHP 5.3.6 and its PEAR installation?
    2. How do I link the command line to XAMPP's PHP version?
    3. How do I build the next PEAR installation on XAMPP's PHP version? I want all my web dev work to go through XAMPP; is there anything else I need to uninstall to avoid this confusion?

    4. I did the following in an attempt to solve the problem. I opened ~/.bashrc with gedit ~/.bashrc and added this to the end of the file, attempting to change the environment PATH:

        export PATH=/opt/lampp/bin:$PATH
        export PATH=/opt/lampp/lib/php:$PATH
        export PATH=/opt/lampp/lib/php/PHPUnit/pearcmd.php:$PATH

    I checked the PHP and PEAR versions using php -v and pear list. For PHP I got:

        PHP 5.3.8 (cli) (built: Sep 19 2011 13:29:27)
        Copyright (c) 1997-2011 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2011 Zend Technologies

    and for PEAR:

        Installed packages, channel pear.php.net:
        =========================================
        Package          Version State
        Archive_Tar      1.3.9   stable
        Console_Getopt   1.3.1   stable
        PEAR             1.9.4   stable
        PHPUnit          1.3.2   stable
        Structures_Graph 1.0.4   stable
        XML_Util         1.2.1   stable

    When I run phpunit MessageTest.php, I get:

        PHP Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
        Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
        PHP Fatal error: require_once(): Failed opening required 'PHP/CodeCoverage/Filter.php' (include_path='.:/php/includes:/opt/lampp/lib/php:/opt/lampp/bin:/opt/lampp/lib/php/PEAR') in /usr/bin/phpunit on line 38

    5. I ran the following commands, reported in other questions as a solution to that error:

        sudo apt-get remove phpunit
        sudo pear channel-discover pear.phpunit.de
        sudo pear channel-discover pear.symfony-project.com
        sudo pear channel-discover components.ez.no
        sudo pear update-channels
        sudo pear upgrade-all
        sudo pear install --alldeps phpunit/PHPUnit
        sudo apt-get install phpunit

    and updated the include_path in php.ini to be:

        include_path = ".:/php/includes:/opt/lampp/lib/php:/opt/lampp/bin:/opt/lampp/lib/php/PEAR"

    The PHP test file, MessageTest.php:

        <?php
        require 'PHPUnit/Autoload.php';

        $path = '/opt/lampp/lib/php/PEAR';
        set_include_path(get_include_path() . PATH_SEPARATOR . $path);

        require_once 'PHPUnit/Framework/TestCase.php';
        require_once 'Message/Controller/MessageController.php';

        class MessageTest extends PHPUnit_Framework_TestCase
        {
            private $message;

            public function setUp() {
                $this->message = new MessageController();
            }

            public function tearDown() {
            }

            public function testRepeat() {
                $yell = "Hello, Any One Out There?";
                $this->message->repeat($yell); // sending a request
                $returnedMessage = $this->message->repeat($yell); // get a response
                $this->assertEquals($returnedMessage, $yell);
            }
        }
        ?>

    The MessageController class from MessageController.php that I'm trying to test:

        <?php
        class MessageController
        {
            public function actionHelloWorld() {
                echo 'helloWorld';
            }

            public function repeat($inputString) {
                return $inputString;
            }
        }

        $msg = new MessageController;
        ?>

    I'm not using any PHP framework; I just made the files and classes as shown. And still I get the same error:

        PHP Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
        Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
        PHP Fatal error: require_once(): Failed opening required 'PHP/CodeCoverage/Filter.php' (include_path='.:/php/includes:/opt/lampp/lib/php:/opt/lampp/bin:/opt/lampp/lib/php/PEAR') in /usr/bin/phpunit on line 38

    Sure, I'm getting demanding here; I've wasted a lot of time and got really frustrated over this. I hope you guys don't get bored reading through my questions, and I appreciate your help. Thanks in advance, Mohamad Elbialy
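
    With two PHP installations in play, the first thing worth pinning down is which binaries the shell actually resolves and which include_path that binary reads. A short sketch of the checks, using only standard commands:

        # which php and phpunit win on the current PATH?
        which php phpunit

        # the /usr/bin/phpunit wrapper is a PHP script; its shebang line
        # shows which interpreter actually runs it
        head -n 1 "$(which phpunit)"

        # and what include_path does the CLI php really use?
        php -r 'echo get_include_path(), PHP_EOL;'
        php --ini   # which php.ini file(s) the CLI reads

    If the shebang points at a non-XAMPP PHP, the error above is coming from the Debian-packaged phpunit in /usr/bin, not from the PEAR copy installed under /opt/lampp.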

  • Apple Airport Express, Extreme and Time Capsules, BT Home Hub, Wireless Extenders confusion

    - by Jamie Hartnoll
    I post quite frequently on Stack Overflow, but use Super User less often, mainly because I don't change hardware often and rarely have software issues! I live in a small stone cottage and have an office in a separate building across a yard. I have a BT Home Hub located in the cottage and a series of Ethernet cables running across the yard to the office. This is fine for my wired stuff. My main office computers are PCs running Windows 7 Ultimate (and one on Win7 Home), all working fine. I also have an old laptop on Win XP which works fine wirelessly in the house for those evenings in front of the TV catching up on a bit of work. I also have an iPhone and an iPad.

    Recently, I have been trying to get WiFi in the office so I can use Adobe Shadow (or whatever it now is!) to improve mobile web development efficiency using my iPhone and iPad, so I bought this:

        http://www.ebuyer.com/393462-zyxel-wre2205-500mbps-powerline-wireless-n300-range-extender-wre2205-gb0101f

    thinking it would be lovely just plugged into the socket by the door in the office, extending the perimeter of the WiFi from my Home Hub. I can't get it to work properly! If I plug a laptop into its Ethernet port, I can get it to connect to the Home Hub, giving me a kind of wired wireless extender. If, however, I plug the Ethernet port into my Home Hub, it then seems to extend the network, but only my iOS devices work: all my wired stuff stops working, and it seems to create an infinite loop where Windows connects to my Home Hub and then, rather than to the Internet, connects back to the extender.

    Anyway, in the meantime I took a fatal trip to the Apple Store, where I purchased an AirPort Express, solely for the purpose of hooking my iOS devices up as wireless music players in the house. I knew it had WiFi, but didn't want to use that part as an extender; I didn't think it would work with a Home Hub anyway. It doesn't! I now have a new wireless network in the house which, when anything connects to it, cannot reach the Internet, so it works ONLY as a wireless music player. I then borrowed some powerline adaptors from someone and realised that this whole thing was getting totally out of control! All the technology is out there, but it's so complicated to get the right series of devices. To add to the confusion, I wouldn't mind a network hard drive. I bought one that broke and lost everything, so now we're looking at the Apple Time Capsules.

    So my question is: IF I buy an Apple Time Capsule, can I:

    1. Hook it up to my Home Hub, leaving the Home Hub connected to the Internet so my hub phones still work, then disable wireless on the Home Hub?
    2. Link my AirPort Express up to the Time Capsule PROPERLY, so it will connect to the Internet?
    3. Do the above with an Apple TV box, should I buy one in future?
    4. Use the Time Capsule as a network hard drive to store video and music that can be viewed/listened to via my iOS devices/Apple TV/AirPort Express, even with my main PC off? (The PC currently stores all this data.)
    5. Hope that the iOS devices like the WiFi from the Time Capsule better than the Home Hub's and work without extension, or buy another AirPort Express to get WiFi in the office?

    Or should I buy an AirPort Extreme and use a USB hard drive for the network drive?

  • Ubuntu WiFi disconnection & frustrating attempts to connect to unavailable WiFi

    - by ashishsony
    Hi, I have already posted this here: [link]. This happened before with the Ubuntu 9.10 beta 2 build too: my WiFi disconnects if I'm idle for 5 minutes, so I can't leave my laptop to download anything; I have to keep using it continuously. As soon as I leave it idle for about 5 minutes, the WiFi disconnects and the pop-up asking for the WiFi password appears, with the password already filled in. I just click Connect and it connects again. So what's the use of asking for the password if the pre-filled password works correctly? And this is happening on Ubuntu 10.04 beta 2 too. The workaround is to open any menu, such as the Applications menu in the taskbar, and keep it open; in this state Ubuntu's idle detection never activates, so the WiFi never disconnects. I have confirmed this many times. It keeps repeating again and again, and I don't know why.

    The second thing I want to report is that there is no practical way to report this bug to Ubuntu. Launchpad.net describes a bug-reporting process that is done against a specific package, but how does a user know which package is causing this error? There should be a clearer process for reporting such bugs to the Ubuntu team.

    Thirdly, the apport utility that reports crashing apps is totally useless on 10.04 beta 2: it collects information and then reports that I can't submit the report because I don't have 100 other package updates, without which I can't submit. Surely on a beta build there will always be packages waiting to be updated, so no system would ever be reported as fully updated, and so no practical apport reporting is possible? Please address these issues; they are really frustrating. I'm a big fan of Ubuntu, but these things really bug me.

    And fourthly: the suspend/hibernate feature has never worked on my Toshiba M70-113 laptop, on any Ubuntu version. I always have to hard-reboot after putting it into suspend/hibernate mode. On Windows this has never been the case, so why can't Ubuntu beat Windows here too? I would really like to see this fixed soon.

    Most importantly: when the router switches off and the WiFi signal goes away, why does Ubuntu keep trying to connect to that very network, and, when it can't, show the prompt to connect manually, with the WiFi key already filled in? What's the use of saving the key if it then has to ask me whether to connect or not? And if the network isn't available, it should just wait until it is. My only option is Cancel, and if I cancel it won't auto-connect! One can see in the screenshot that it says "authentication required by wireless network" when there isn't any such requirement, as the router has gone down!

  • disk-to-disk backup without costly backup redundancy?

    - by AaronLS
    A good backup strategy involves a combination of: 1) disconnected backups/snapshots that will not be affected by bugs, viruses, and/or security breaches; 2) geographically distributed backups to protect against local disasters; 3) testing backups to ensure that they can be restored as needed.

    Generally I take an onsite backup daily and an offsite backup weekly, and do test restores periodically. In the rare circumstance that I need to restore files, I do so from the local backup. Should a catastrophic event destroy the servers and local backups, the offsite weekly tape backup would be used to restore the files. I don't need multiple offsite backups with redundancy. I ALREADY HAVE REDUNDANCY THROUGH THE USE OF BOTH LOCAL AND REMOTE BACKUPS. I keep recovery blocks and par files with the backups, so I already have protection against a small percentage of corrupt bits, and I perform test restores to ensure the backups function properly. Should the remote backups experience data loss, I can replace them with one of the local backups. There are historical offsite backups as well, so if a data loss went unnoticed for a few weeks (such as a bug, security breach, or virus), the data could be restored from an older backup.

    With this arrangement, the only scenario that poses a risk of complete data loss is one where the local backups, the remote backups, and the servers all lose data in the same time period. I'm willing to risk that happening, since the odds of that trifecta are negligibly small and the data isn't THAT valuable to me. So I hope I have made clear that I don't need redundancy in my offsite backups, because I have covered all the bases. I know this exact technique is employed by numerous businesses. Of course, some take multiple offsite backups because their data is so valuable that they won't risk even the trifecta disaster, but in the majority of cases the trifecta is an accepted risk. I had to cover all this because some people don't read: I think I have justified my backup strategy, and the majority of businesses who use offsite tape backups have no additional redundancy beyond what is mentioned above (recovery blocks, par files, historical snapshots).

    Now I would like to eliminate the use of tapes for offsite backups and instead use a backup service. Most, however, are extremely costly in $/GB/month of storage. I don't mind paying for transfer bandwidth, but the cost of storage is far too high. All of them advertise that they maintain backups of the data, and I imagine they use RAID as well. Obviously, if you were using them to host servers this would all be necessary, but in my scenario I am simply replacing my offsite tape backups with such a service, so there is no need for RAID and absolutely no value in another layer of backups of my backups.

    My one and only question: "Are there online data-storage/backup services that do not use redundancy or offer backups (backups of my backups) as part of their packages, and thus are more reasonably priced?" NOT my question: "Is this a flawed strategy?" I don't care whether you think this is a good strategy or not; I know it's pretty standard. Very few people make an extra copy of their offsite backups, since they already have local backups they can use to replace the remote ones if something catastrophic happens at the remote site. Please limit your responses to the question posed.

    Sorry if I seem a little abrasive, but I had some trolls in my last post who read neither my requirements nor my question and tried to answer a totally different one. I made it pretty clear, but didn't try to justify my strategy, because I wasn't asking whether my strategy was justifiable. So I apologize for the length, which really shouldn't have been necessary, but too many people here sidetrack questions by responding without addressing the question at hand.

  • Exchange 2003-Exchange 2010 post migration GAL/OAB problem

    - by user68726
    I am very new to Exchange, so forgive my newbie-ness. I've exhausted Google trying to find a way to solve my problem, so I'm hoping some of you gurus can shed some light on my next steps. Please forgive my bungling around through this.

    The problem: I cannot download/update the Global Address List (GAL) and Offline Address Book (OAB) on my Outlook 2010 clients. I get:

        Task 'emailaddress' reported error (0x8004010F): 'The operation failed. An object cannot be found.'

    (Note: I've replaced my actual email address with 'emailaddress'.) I'm using cached Exchange mode; if I turn it off, Outlook hangs completely from the moment I start it.

    Background information: I migrated mailboxes, the public store, etc. from a Small Business Server 2003 box running Exchange 2003 to a Server 2008 R2 box running Exchange 2010, based primarily on an Experts Exchange how-to article. The Exchange server is up and running as an internet-facing Exchange server with all the roles necessary to send and receive mail, and in that capacity it is working fine. I "thought" I had successfully migrated everything from the SBS03 box, and due to huge numbers of errors in everything from AD to the Exchange install itself, I removed the reference to the SBS03 server in ADSI Edit. I still have access to the old SBS03 box, but as I said, the number of errors in everything is preventing even the uninstall of Exchange (or the starting of the Exchange Information Store service), so I'm quite content to leave that box completely out of the picture while trying to solve my problem.

    After research, I discovered this is most likely because I failed to run update-globaladdresslist (or get/update) from the Exchange shell before I removed the Exchange 2003 server from ADSI Edit (and the network). If I run the command now, it gives me:

        WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/Offline Address Book - first administrative group" is invalid and couldn't be updated.
        WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/Schedule+ Free Busy Information - first administrative group" is invalid and couldn't be updated.
        WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/ContainernameArchive" is invalid and couldn't be updated.
        WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/ContainernameContacts" is invalid and couldn't be updated.

    (Note that I've replaced my domain with "domainname.com" and my organization name with "containername".)

    What I've tried: I don't want to use the old OAB or GAL; I don't care about either, as our GAL and distribution lists needed reorganizing anyway. At this point I really just want to get rid of the old reference to the "first administrative group" and move on. I tried to create a new GAL and tell Exchange 2010 to use that GAL instead of the old one, but I'm obviously missing some of the commands, or something simple I need to do, to start over with a blank slate/GAL/OAB. I'm very tempted to delete the entire "first administrative group" tree from ADSI Edit to get rid of the ridiculous reference that no longer exists, but I don't want to break something else.

    Commands run to try to create a new GAL and tell Exchange 2010 to use that GAL:

        New-GlobalAddressList -Name NAMEOFNEWGAL
        Set-GlobalAddressList GUID -Name NAMEOFNEWGAL

    This did nothing for me, except that now when I run Get-GlobalAddressList (with or without the | FL pipe) I see two GALs listed: the "Default Global Address List" and the "NAMEOFNEWGAL" that I created. After a little more research this morning, it looks like you can't change/delete/remove the default address list, and the only way to do what I'm trying to do might be to remove the default address list via ADSI Edit and recreate it with a command something like:

        New-GlobalAddressList -Name "Default Global Address List" -IncludedRecipients AllRecipients

    This would be acceptable, but I've searched and searched and can't find instructions or a breakdown of where exactly the default GAL lives in AD, and whether I'd have to remove multiple child references/records.

    Of interest: I'm getting Event ID 9337 in the application log on my Exchange 2010 box:

        OALGen did not find any recipients in address list '\Global Address List'. This offline address list will not be generated. - \NAMEOFMYOAB

    which pretty much confirms my suspicion that the empty GAL/OAB is what's causing the Outlook client 0x8004010F error. Help please!
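
    One hedged possibility, given that creating address lists already works here: inspect what exists, recreate a clean all-recipients list, and force regeneration of the GAL and OAB. The cmdlets below exist in Exchange 2010, but whether this untangles a half-removed "first administrative group" should be verified in a lab before running it in production:

        # inspect what exists and what each list's filter is
        Get-GlobalAddressList | Format-List Name,RecipientFilter

        # recreate a clean all-recipients GAL and regenerate it
        New-GlobalAddressList -Name "All Recipients GAL" -IncludedRecipients AllRecipients
        Update-GlobalAddressList -Identity "All Recipients GAL"

        # rebuild the offline address book afterwards so Outlook can download it
        Get-OfflineAddressBook | Update-OfflineAddressBook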

  • How to serve static files for multiple Django projects via nginx on the same domain

    - by thanley
    I am trying to set up my nginx conf so that I can serve the relevant files for my multiple Django projects. Ultimately I want each app to be available at www.example.com/app1, www.example.com/app2, etc. They all serve static files from a 'static-files' directory located in their respective project roots. The project structure (best-effort layout):

        home/ubuntu/web/www.example.com/
            ref/
            logs/
            app/
                app1/
                    app1/  (static/, bower_components/, templatetags/)
                    app1_project/
                    templates/
                    static-files/
                app2/
                    app2/  (static/, templates/, templatetags/)
                    app2_project/
                    static-files/
                app3/
                    tests/
                    templates/
                    static-files/
                    static/
                    app3_project/
                    app3/
                    venv/

    When I use the conf below, there are no problems serving the static files for the one app I designate in the /static/ location, and I can access the different apps at their locations. However, I cannot figure out how to serve all of the static files for all the apps at the same time. I have looked into using the try_files directive for the static location, but cannot figure out how to tell whether it is working or not.

    Nginx conf (only serving static files for one app):

        server {
            listen 80;
            server_name example.com;
            server_name www.example.com;

            access_log /home/ubuntu/web/www.example.com/logs/access.log;
            error_log /home/ubuntu/web/www.example.com/logs/error.log;

            root /home/ubuntu/web/www.example.com/;

            location /static/ {
                alias /home/ubuntu/web/www.example.com/app/app1/static-files/;
            }

            location /media/ {
                alias /home/ubuntu/web/www.example.com/media/;
            }

            location /app1/ {
                include uwsgi_params;
                uwsgi_param SCRIPT_NAME /app1;
                uwsgi_modifier1 30;
                uwsgi_pass unix:///home/ubuntu/web/www.example.com/app1.sock;
            }

            location /app2/ {
                include uwsgi_params;
                uwsgi_param SCRIPT_NAME /app2;
                uwsgi_modifier1 30;
                uwsgi_pass unix:///home/ubuntu/web/www.example.com/app2.sock;
            }

            location /app3/ {
                include uwsgi_params;
                uwsgi_param SCRIPT_NAME /app3;
                uwsgi_modifier1 30;
                uwsgi_pass unix:///home/ubuntu/web/www.example.com/app3.sock;
            }

            # what to serve if the upstream is not available or crashes
            error_page 400 /static/400.html;
            error_page 403 /static/403.html;
            error_page 404 /static/404.html;
            error_page 500 502 503 504 /static/500.html;

            # Compression
            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 5;
            gzip_proxied any;
            gzip_min_length 1100;
            gzip_buffers 16 8k;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            # Some versions of IE 6 don't handle compression well on some mime-types,
            # so just disable it for them
            gzip_disable "MSIE [1-6].(?!.*SV1)";

            # Set a Vary header so downstream proxies don't send cached gzipped
            # content to IE6
            gzip_vary on;
        }

    Essentially I want to have something like this (I know this won't work):

        location /static/ {
            alias /home/ubuntu/web/www.example.com/app/app1/static-files/;
            alias /home/ubuntu/web/www.example.com/app/app2/static-files/;
            alias /home/ubuntu/web/www.example.com/app/app3/static-files/;
        }

    or this (where it can serve the static files based on the URI):

        location /static/ {
            try_files $uri $uri/ =404;
        }

    So basically: if I use try_files like the above, is the problem in my project directory structure? Or am I totally off base, and do I need to put each app on a subdomain instead of going this route? Thanks for any suggestions.

    TLDR: I want to go to www.example.com/APP_NAME_HERE and have nginx serve the static location /home/ubuntu/web/www.example.com/app/APP_NAME_HERE/static-files/.
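
    A hedged sketch of one way to keep a single /static/ prefix while fanning out to each project's static-files directory: capture the app name as the first path segment. URLs then look like /static/app1/css/site.css, which assumes each Django project's STATIC_URL is set to the matching /static/appN/ prefix:

        # /static/<app>/<path>  ->  .../app/<app>/static-files/<path>
        location ~ ^/static/(?<app>app1|app2|app3)/(?<rest>.*)$ {
            alias /home/ubuntu/web/www.example.com/app/$app/static-files/$rest;
        }

    The named captures ($app, $rest) are standard nginx regex-location features; listing the app names explicitly keeps arbitrary path segments from escaping into the filesystem.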

    Read the article

  • Why is IIS Anonymous authentication being used with administrative UNC drive access?

    - by Mark Lindell
    My account is a local administrator on my machine. If I try to browse to a non-existent drive letter on my own box using a UNC path name:

        \\mymachine\x$

    my account would get locked out. I would also get the following warning (Event ID 100, type "Warning") 5 times under the "System" group in Event Viewer on my box:

        The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: Logon failure: unknown user name or bad password.

    I would also get the following warning 3 times:

        The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: The referenced account is currently locked out and may not be logged on to.

    On the domain controller, Event ID 680 of type "Failure Audit" would appear 4 times under the "Security" group in Event Viewer:

        Logon attempt by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
        Logon account: myaccount

    Followed by Event ID 644:

        User Account Locked Out:
        Target Account Name: myaccount
        Target Account ID: OURDOMAIN\myaccount
        Caller Machine Name: MYMACHINE
        Caller User Name: STAN$
        Caller Domain: OURDOMAIN
        Caller Logon ID: (0x0,0x3E7)

    Followed by another 4 errors having Event ID 680. Strangely, every time I tried to browse to the UNC path I would be prompted for a user name and password, the above errors would be written to the log, and my account would be locked out. When I hit "Cancel" in response to the user name/password prompt, the following message box would display:

        Windows cannot find \\mymachine\x$. Check the spelling and try again, or try searching for the item by clicking the Start button and then clicking Search.

    I checked with others in the group using XP and they only got the above message box when browsing to a "bad" drive letter on their box. No one else was prompted for a user name/password and then locked out. So, every time I tried to browse to the "bad" drive letter, behind the scenes XP was trying to log in 8 times using bad credentials (or at least a bad password, as the login name was correct), causing my account to get locked out on the 4th try. Interestingly, if I tried browsing to a "good" drive such as c$ it would work fine. As a test, I tried logging on to my box as a different login and browsing the "bad" UNC path. Strangely, my ourdomain\myaccount account was getting locked out, not the one I was logged in as! I was totally confused as to why the credentials for the other login were being passed. After much Googling, I found a link referring to some IIS settings I was vaguely familiar with from the past, but could not see how they would affect this issue. It was related to the IIS directory security setting "Anonymous access and authentication control", located under:

        Control Panel/Administrative Tools/Computer Management/Services and Applications/Internet Information Services/Web Sites/Default Web Site/Properties/Directory Security/Anonymous access and authentication control/Edit/Password

    I found no indication while scouring the Internet that this property was related to my UNC problem. But I did notice that this property was set to my domain user name and password. And my password did age recently, but I had not reset the password accordingly for this property. Sure enough, keying in the new password corrected the problem: I was no longer prompted for a user name/password when browsing the UNC path, and the account lock-outs ceased. Now, a couple of questions:

    1. Why would an IIS setting affect the browsing of a UNC path on a local box?
    2. Why had I not encountered this problem before? My password has aged several times and I've never encountered this problem, and I can't remember the last time I updated the "Anonymous access" IIS password, it's been so long. I've run the script after a password reset before and never had my account locked out due to the UNC problem (the script accesses UNC paths as a normal part of its processing). Windows Update did install "Cumulative Security Update for Internet Explorer 7 for Windows XP (KB972260)" on my box on 7/29/2009; I wonder if that is responsible.
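    If the goal is to script the password sync rather than click through the MMC dialog each time the domain password ages, a hedged sketch using adsutil.vbs follows. The AdminScripts path and the AnonymousUserName/AnonymousUserPass metabase properties assume a default IIS 5.x/6.x install; verify both before relying on this:

        REM Sketch only -- path and property names are assumptions for a default install
        cd /d %SystemDrive%\Inetpub\AdminScripts
        REM Show which account IIS uses for anonymous access:
        cscript adsutil.vbs GET w3svc/AnonymousUserName
        REM Sync the stored password with the domain account's new password:
        cscript adsutil.vbs SET w3svc/AnonymousUserPass "NewDomainPassword"

    Running something like this as part of the password-reset routine would keep the IIS anonymous credential from going stale and triggering the lockouts described above.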

    Read the article

  • Strange enduser experience with Liferay, Glassfish and Apache on RedHat

    - by Pete Helgren
    Tried multiple forums to get to the bottom of this. I hope I can get some direction here. Here is the stack I am working with:

        Red Hat Enterprise Linux Server release 5.6 (Tikanga)
        Liferay 6.0.6 on Glassfish 3.0.1
        MySQL 5.0.77
        Apache 2.2.3

    The Liferay portal provides a variety of portlets to end users: static content (web pages), static resources (primarily pdf and mp3 files 1mb - 80mb in size), file upload and download capabilities (primarily 40-60mb mp3 files), and online streaming of those MP3 files. Here is the strange end-user experience. Under normal load (20-30 users uploading, downloading, or streaming files and 20-30 accessing static content, some of it downloads), we see the following:

        1) Clicking a link triggers the download of a portion of an MP3 (the portion is a few seconds long).
        2) Clicking on a link triggers the download of the page content rather than rendering it.
        3) Clicking a link causes the page to dump binary data to the end user rather than the expected content.
        4) Clicking a link returns the text of a javascript file rather than rendering the page.

    Each occurrence is totally random (or appears so). Sometimes it works, sometimes it doesn't. It seems to have no relation to browser or client OS. The strange events seem to occur much more frequently when using an SSL connection rather than regular http. Apache serves as a reverse proxy only; it basically passes all the requests through to Glassfish, and there isn't any static content served by Apache itself. We rebuilt the entire stack from scratch and redeployed the portlet wars and still have the same issues. Liferay is running as a single server (not clustered). We disabled mod_cache in Apache. The problems are more frequent as the server load grows. This morning the load is pretty light and we are seeing few problems, but use of the site will grow, particularly tonight around 9pm CST through Wednesday morning. You could try the site (http://preview.bsfinternational.org) during those times and I would expect that you might experience one of the weirdnesses as you randomly click links on the site (https is invoked only when signed in). Again, https seems to exacerbate the issue. This seems very much like a caching issue, but I don't know where in the stack to start peeling the onion. Apache? Liferay? Glassfish? MySQL? Maybe even Red Hat? We are stumped, and most forums we have posted to (Liferay and Glassfish) have returned very few suggestions. I just need an idea of where to start looking. I understand that we could have a portlet

    EDIT: Opening the files that appear to be pages that download rather than render in a hex editor, we see that the first 4000 characters are "junk" and then the 'normal' "HTTP/1.1 ..." header is seen. So something is dumping a jumble of characters up to offset 4000 (when viewing it in a hex editor). Perhaps a clue? Ideas?
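    Since the symptoms look like responses getting crossed between clients on the proxy hop, one hedged experiment is to stop Apache from reusing backend connections to Glassfish. This is a sketch only: the backend host/port (localhost:8080) is an assumption because the actual ProxyPass lines are not shown, and the disablereuse key may require a newer 2.2.x build than the stock RHEL 5 Apache:

        # Sketch: force a fresh backend connection per request (mod_proxy)
        SetEnv proxy-nokeepalive 1
        ProxyPass / http://localhost:8080/ disablereuse=On retry=0
        ProxyPassReverse / http://localhost:8080/

    If the random downloads stop with connection reuse disabled, the likely culprit is a response-framing mismatch (chunked encoding or Content-Length) between Glassfish and mod_proxy rather than a cache, which would also explain why the 4000 bytes of "junk" precede an otherwise normal HTTP header.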

    Read the article

  • Screen -X exec commands not working until manually attached

    - by James Watt
    I have a batch script that starts a java server application inside of a screen. The command looks like this:

        cd /dir/ && screen -A -m -d -S javascreen java -Xms640M -Xmx1024M -jar javaserverapp.jar nogui

    After I run the batch script, it starts the server and puts it inside the correct screen. If I list my screens after, I see something like this:

        user@gtwy /dir $ screen -list
        There is a screen on:
                16180.javascreen        (Detached)
        1 Socket in /var/run/screen/S-user.

    However, I have a second batch script that sends automated commands to this server and runs on a different crontab interval. Because of the way the application works, I send commands to it like this (this command tells it to alert connected users "testing 123"):

        screen -X exec .\!\! echo say testing 123

    I've also tried:

        screen -R -X exec .\!\! echo say testing 123
        screen -S javascreen -X exec .\!\! echo say testing 123

    Unfortunately, these commands DO NOT WORK. They don't even give me an error message; they just do nothing. HOWEVER, if I manually attach to the screen first (with the below command) and then detach, now I can run any of the above commands flawlessly. I can demonstrate this with a video, if I wasn't clear enough here.

        screen -r -d

    Thanks in advance. Update: here are the important parts of /etc/screenrc. It should be totally vanilla; I've never edited this file.

        # VARIABLES
        # ===============================================================
        # No annoying audible bell, using "visual bell"
        # vbell on                              # default: off
        # vbell_msg " -- Bell,Bell!! -- "       # default: "Wuff,Wuff!!"

        # Automatically detach on hangup.
        autodetach on                           # default: on

        # Don't display the copyright page
        startup_message off                     # default: on

        # Uses nethack-style messages
        # nethack on                            # default: off

        # Affects the copying of text regions
        crlf off                                # default: off

        # Enable/disable multiuser mode. Standard screen operation is singleuser.
        # In multiuser mode the commands acladd, aclchg, aclgrp and acldel can be used
        # to enable (and disable) other user accessing this screen session.
        # Requires suid-root.
        multiuser off

        # Change default scrollback value for new windows
        defscrollback 1000                      # default: 100

        # Define the time that all windows monitored for silence should
        # wait before displaying a message. Default 30 seconds.
        silencewait 15                          # default: 30

        # bufferfile: The file to use for commands
        # "readbuf" ('<') and "writebuf" ('>'):
        bufferfile $HOME/.screen_exchange

        # hardcopydir: The directory which contains all hardcopies.
        # hardcopydir ~/.hardcopy
        # hardcopydir ~/.screen

        # shell: Default process started in screen's windows.
        # Makes it possible to use a different shell inside screen
        # than is set as the default login shell.
        # If begins with a '-' character, the shell will be started as a login shell.
        # shell zsh
        # shell bash
        # shell ksh
        shell -$SHELL
        # shellaka '> |tcsh'
        # shelltitle '$ |bash'

        # emulate .logout message
        pow_detach_msg "Screen session of \$LOGNAME \$:cr:\$:nl:ended."

        # caption always " %w --- %c:%s"
        # caption always "%3n %t%? @%u%?%? [%h]%?%=%c"

        # advertise hardstatus support to $TERMCAP
        # termcapinfo * '' 'hs:ts=\E_:fs=\E\\:ds=\E_\E\\'

        # set every new windows hardstatus line to somenthing descriptive
        # defhstatus "screen: ^En (^Et)"

        # don't kill window after the process died
        # zombie "^["
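    A hedged workaround that sidesteps exec entirely: target a window explicitly with -p and inject the text as keystrokes with stuff. The likely explanation (an educated guess, consistent with the symptoms) is that a never-attached session has no "current" window for -X to act on, which is why attaching once makes the commands start working; -p forces a window selection. The window number 0 below assumes the java process runs in the session's first window:

        # Sketch: send the line, plus a carriage return, to window 0 of the session
        screen -S javascreen -p 0 -X stuff "say testing 123$(printf '\r')"

    This can be dropped straight into the second batch script in place of the exec invocations.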

    Read the article

  • Neighbour table overflow on Linux hosts related to bridging and ipv6

    - by tim
    Note: I already have a workaround for this problem (as described below), so this is only a "want-to-know" question. I have a production setup with around 50 hosts, including blades running Xen 4 and EqualLogics providing iSCSI. All Xen dom0s are almost plain Debian 5. The setup includes several bridges on every dom0 to support Xen bridged networking; in total there are between 5 and 12 bridges on each dom0, servicing one VLAN each. None of the hosts has routing enabled. At one point in time we moved one of the machines to new hardware, including a RAID controller, and so we installed an upstream 3.0.22/x86_64 kernel with Xen patches. All other machines run the Debian xen-dom0-kernel. Since then we noticed on all hosts in the setup the following errors every ~2 minutes:

        [55888.881994] __ratelimit: 908 callbacks suppressed
        [55888.882221] Neighbour table overflow.
        [55888.882476] Neighbour table overflow.
        [55888.882732] Neighbour table overflow.
        [55888.883050] Neighbour table overflow.
        [55888.883307] Neighbour table overflow.
        [55888.883562] Neighbour table overflow.
        [55888.883859] Neighbour table overflow.
        [55888.884118] Neighbour table overflow.
        [55888.884373] Neighbour table overflow.
        [55888.884666] Neighbour table overflow.

    The ARP table (arp -n) never showed more than around 20 entries on any machine. We tried the obvious tweaks and raised the /proc/sys/net/ipv4/neigh/default/gc_thresh* values, finally to 16384 entries, but to no effect. Not even the ~2 minute interval changed, which led me to the conclusion that this is totally unrelated. tcpdump showed no uncommon IPv4 traffic on any interface. The only interesting finding from tcpdump was IPv6 packets bursting in like:

        14:33:13.137668 IP6 fe80::216:3eff:fe1d:9d01 > ff02::1:ff1d:9d01: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ff1d:9d01, length 24
        14:33:13.138061 IP6 fe80::216:3eff:fe1d:a8c1 > ff02::1:ff1d:a8c1: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ff1d:a8c1, length 24
        14:33:13.138619 IP6 fe80::216:3eff:fe1d:bf81 > ff02::1:ff1d:bf81: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ff1d:bf81, length 24
        14:33:13.138974 IP6 fe80::216:3eff:fe1d:eb41 > ff02::1:ff1d:eb41: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ff1d:eb41, length 24

    which placed the idea in my mind that the problem may be related to IPv6, since we have no IPv6 services in this setup. The only other hint was the coincidence of the host upgrade with the beginning of the problems. I powered down the host in question and the errors were gone. Then I subsequently took down the bridges on the host, and when I took down (ifconfig down) one particular bridge:

        br-vlan2159 Link encap:Ethernet  HWaddr 00:26:b9:fb:16:2c
                    inet6 addr: fe80::226:b9ff:fefb:162c/64 Scope:Link
                    UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                    RX packets:120 errors:0 dropped:0 overruns:0 frame:0
                    TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
                    collisions:0 txqueuelen:0
                    RX bytes:5286 (5.1 KiB)  TX bytes:726 (726.0 B)

        eth0.2159   Link encap:Ethernet  HWaddr 00:26:b9:fb:16:2c
                    inet6 addr: fe80::226:b9ff:fefb:162c/64 Scope:Link
                    UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                    RX packets:1801 errors:0 dropped:0 overruns:0 frame:0
                    TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
                    collisions:0 txqueuelen:0
                    RX bytes:126228 (123.2 KiB)  TX bytes:1464 (1.4 KiB)

        bridge name     bridge id           STP enabled   interfaces
        ...
        br-vlan2158     8000.0026b9fb162c   no            eth0.2158
        br-vlan2159     8000.0026b9fb162c   no            eth0.2159

    the errors went away again. As you can see, the bridge holds no IPv4 address and its only member is eth0.2159, so no traffic should cross it. The bridge/interface pairs for .2157 and .2158, which are in all aspects identical to .2159 apart from the VLAN they are connected to, had no effect when taken down. I then disabled IPv6 on the entire host via sysctl net.ipv6.conf.all.disable_ipv6 and rebooted. After this, even with bridge br-vlan2159 enabled, no errors occur. Any ideas are welcome.
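    For completeness, a sketch of making that workaround survive reboots via Debian's standard /etc/sysctl.conf mechanism. The 'default' key is added here so that bridges created later also start with IPv6 off; that addition is an assumption on my part, not part of the original workaround:

        # /etc/sysctl.conf -- disable IPv6 on all current and future interfaces
        net.ipv6.conf.all.disable_ipv6 = 1
        net.ipv6.conf.default.disable_ipv6 = 1

    Apply the settings immediately, without a reboot, with:

        sysctl -p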

    Read the article
