Search Results

Search found 19018 results on 761 pages for 'raw input'.


  • Set up Gmail with Google Apps for your own domain

    - by erdomester
    I rent a server from a German company. I have remote access to it as well as WHM and cPanel. I decided to use Google's mail servers for obvious reasons. I am not an admin, just an average guy trying to set up what needs to be set up. The problem is I am unable to make the necessary settings. I watched YouTube tutorials and followed written ones as well as Google's help, but there is (at least) one serious problem with my domain settings. The domain console always says:

        Your MX records are incorrect

    When I check dappwall.com at mxtoolbox.com it says:

        Pref  Hostname           IP Address   TTL
        10    mail.dappwall.com  46.4.88.247  24 hrs

    But this is not the host name. I checked WHM and my hostname is server1.dappwall.com; I can confirm it by typing the hostname command in PuTTY. However, if I do an MX lookup at mxtoolbox.com on server1.dappwall.com or mail.dappwall.com, I get "Lookup failed after 1 name servers timed out or responded non-authoritatively". I ran checks with the Google Apps Toolbox on dappwall.com and two problems emerged:

    1. No Google mail exchangers found. Relayhost configuration? 10 mail.dappwall.com

    In Google Apps > Settings for Gmail > Advanced settings it also says that my current MX record for dappwall.com is "Priority 10, points to MAIL.DAPPWALL.COM." So mail.dappwall.com again. I also have access to a robot provided by the company I rent the server from. There I see this mail host in two places, but how should I (if it's necessary) modify this? I set Email routing to Automatically Detect Configuration.

    2. There SHOULD be a valid SPF record: "v=spf1 include:_spf.google.com ~all"

    In the DNS Zone Editor I added this SPF record:

        Name           TTL   Class  Type  Record
        dappwall.com.  1440  IN     TXT   v=spf1 include:_spf.google.com ~all

    On the cPanel Email Authentication page it says:

        SPF: Status: Enabled
        Warning: cPanel is unable to verify that this server is an authoritative nameserver for dappwall.com. [?]
        Your current raw SPF record is: v=spf1 include:_spf.google.com ~all

    How can I confirm that my server is an authoritative nameserver for dappwall.com? In WHM > Service Configuration > Mailserver Selection, Dovecot was set but I disabled it (I don't know if that's OK). What am I missing here? Where is that mail.dappwall.com coming from?
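    For reference, a sketch of what the zone's MX records would need to look like for Google Apps of that era to receive mail; the host names and priorities are Google's published values from that time (double-check them against the Apps control panel), the TTL is illustrative, and the stray "10 mail.dappwall.com" record would be removed:

        dappwall.com.  3600  IN  MX  1   ASPMX.L.GOOGLE.COM.
        dappwall.com.  3600  IN  MX  5   ALT1.ASPMX.L.GOOGLE.COM.
        dappwall.com.  3600  IN  MX  5   ALT2.ASPMX.L.GOOGLE.COM.
        dappwall.com.  3600  IN  MX  10  ASPMX2.GOOGLEMAIL.COM.
        dappwall.com.  3600  IN  MX  10  ASPMX3.GOOGLEMAIL.COM.

    Once the records are changed at the nameservers that are actually authoritative for dappwall.com, a fresh mxtoolbox.com lookup (or dig dappwall.com MX +short) should list the Google hosts after the old records' TTL expires.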


  • Java Compiler Creation Help... Please

    - by Brian
    I need some help with my code here... What we are trying to do is make a compiler that will read a file containing machine code and convert it into 100 lines of 4-digit words. For example, this is the machine code, made up of opcodes and operands:

        799 798 198 499 1008 1108 899 909 898 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

    I need some help please, thanks. Everything compiles, but when I go and run my Test.java I get the following output:

        Exception in thread "main" java.util.NoSuchElementException: No line found
            at java.util.Scanner.nextLine(Scanner.java:1516)
            at Compiler.FirstPass(Compiler.java:22)
            at Compiler.compile(Compiler.java:11)
            at Test.main(Test.java:5)

    Here is my class Compiler:

        import java.io.*;
        import java.io.DataOutputStream;
        import java.util.NoSuchElementException;
        import java.util.Scanner;

        class Compiler {

            private int lc = 0;
            private int dc = 99;

            public void compile(String filename) {
                SymbolList symbolTable = FirstPass(filename);
                SecondPass(symbolTable, filename);
            }

            public SymbolList FirstPass(String filename) {
                File file = new File(filename);
                SymbolList temp = new SymbolList();
                int dc = 99;
                int lc = 0;
                try {
                    Scanner scan = new Scanner(file);
                    String line = scan.nextLine();
                    String[] linearray = line.split(" ");
                    while (line != null) {
                        if (!linearray[0].equals("REM")) {
                            if (!this.isInstruction(linearray[0])) {
                                linearray[0] = removeColon(linearray[0]);
                                if (this.isInstruction(linearray[1])) {
                                    temp.add(new Symbol(linearray[0], lc, null));
                                    lc++;
                                } else {
                                    temp.add(new Symbol(linearray[0], dc, Integer.valueOf(linearray[2])));
                                    dc--;
                                }
                            } else {
                                if (!linearray[0].equals("REM")) lc++;
                            }
                        }
                        try {
                            line = scan.nextLine();
                        } catch (NoSuchElementException e) {
                            line = null;
                            break;
                        }
                        linearray = line.split(" ");
                    }
                } catch (FileNotFoundException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                return temp;
            }

            public String makeFilename(String filename) {
                return filename + ".ex";
            }

            public String removeColon(String str) {
                if (str.charAt(str.length() - 1) == ':') {
                    return str.substring(0, str.length() - 1);
                } else {
                    return str;
                }
            }

            public void SecondPass(SymbolList symbolTable, String filename) {
                try {
                    int dc = 99;
                    // Open file for reading
                    File file = new File(filename);
                    Scanner scan = new Scanner(file);
                    // Make filename of new executable file
                    String newfile = makeFilename(filename);
                    // Open Output Stream for writing new file.
                    FileOutputStream os = new FileOutputStream(filename);
                    DataOutputStream dos = new DataOutputStream(os);
                    // Read First line. Split line by Spaces into linearray.
                    String line = scan.nextLine();
                    String[] linearray = line.split(" ");
                    while (scan.hasNextLine()) {
                        if (!linearray[0].equals("REM")) {
                            int inst = 0, opcode, loc;
                            if (isInstruction(linearray[0])) {
                                opcode = getOpcode(linearray[0]);
                                loc = symbolTable.searchName(linearray[1]).getMemloc();
                                inst = (opcode * 100) + loc;
                            } else if (!isInstruction(linearray[0])) {
                                if (isInstruction(linearray[1])) {
                                    opcode = getOpcode(linearray[1]);
                                    if (linearray[1].equals("STOP"))
                                        inst = 0000;
                                    else {
                                        loc = symbolTable.searchName(linearray[2]).getMemloc();
                                        inst = (opcode * 100) + loc;
                                    }
                                }
                                if (linearray[1].equals("DC"))
                                    dc--;
                            }
                            System.out.println(inst);
                            dos.writeInt(inst);
                            linearray = line.split(" ");
                        }
                        if (scan.hasNextLine()) {
                            line = scan.nextLine();
                        }
                    }
                    scan.close();
                    for (int i = lc; i <= dc; i++) {
                        dos.writeInt(0);
                    }
                    for (int i = dc + 1; i < 100; i++) {
                        dos.writeInt(symbolTable.searchLocation(i).getValue());
                        if (i != 99)
                            dos.writeInt(0);
                    }
                    dos.close();
                    os.close();
                } catch (Exception e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }

            public int getOpcode(String inst) {
                int toreturn = -1;
                if (isInstruction(inst)) {
                    if (inst.equals("STOP")) toreturn = 0;
                    if (inst.equals("LD"))   toreturn = 1;
                    if (inst.equals("STO"))  toreturn = 2;
                    if (inst.equals("ADD"))  toreturn = 3;
                    if (inst.equals("SUB"))  toreturn = 4;
                    if (inst.equals("MPY"))  toreturn = 5;
                    if (inst.equals("DIV"))  toreturn = 6;
                    if (inst.equals("IN"))   toreturn = 7;
                    if (inst.equals("OUT"))  toreturn = 8;
                    if (inst.equals("B"))    toreturn = 9;
                    if (inst.equals("BGTR")) toreturn = 10;
                    if (inst.equals("BZ"))   toreturn = 11;
                    return toreturn;
                } else {
                    return -1;
                }
            }

            public boolean isInstruction(String totest) {
                boolean toreturn = false;
                String[] labels = {"IN", "LD", "SUB", "BGTR", "BZ", "OUT", "B", "STO", "STOP", "ADD", "MTY", "DIV"};
                for (int i = 0; i < 12; i++) {
                    if (totest.equals(labels[i])) toreturn = true;
                }
                return toreturn;
            }
        }

    And here is my class Computer:

        import java.io.*;
        import java.util.NoSuchElementException;
        import java.util.Scanner;

        class Computer {

            private Cpu cpu;
            private Input in;
            private OutPut out;
            private Memory mem;

            public Computer() throws IOException {
                Memory mem = new Memory(100);
                Input in = new Input();
                OutPut out = new OutPut();
                Cpu cpu = new Cpu();
                System.out.println(in.getInt());
            }

            public void run() throws IOException {
                cpu.reset();
                cpu.setMDR(mem.read(cpu.getMAR()));
                cpu.fetch2();
                while (!cpu.stop()) {
                    cpu.decode();
                    if (cpu.OutFlag())
                        OutPut.display(mem.read(cpu.getMAR()));
                    if (cpu.InFlag())
                        mem.write(cpu.getMDR(), in.getInt());
                    if (cpu.StoreFlag()) {
                        mem.write(cpu.getMAR(), in.getInt());
                        cpu.getMDR();
                    } else {
                        cpu.setMDR(mem.read(cpu.getMAR()));
                        cpu.execute();
                        cpu.fetch();
                        cpu.setMDR(mem.read(cpu.getMAR()));
                        cpu.fetch2();
                    }
                }
            }

            public void load() {
                mem.loadMemory();
            }
        }

    Here is my Memory class:

        import java.io.*;
        import java.util.NoSuchElementException;
        import java.util.Scanner;

        class Memory {

            private MemEl[] memArray;
            private int size;
            private int[] mem;

            public Memory(int s) {
                size = s;
                memArray = new MemEl[s];
                for (int i = 0; i < s; i++)
                    memArray[i] = new MemEl();
            }

            public void write(int loc, int val) {
                if (loc >= 0 && loc < size)
                    memArray[loc].write(val);
                else
                    System.out.println("Index Not in Domain");
            }

            public int read(int loc) {
                return memArray[loc].read();
            }

            public void dump() {
                for (int i = 0; i < size; i++)
                    if (i % 1 == 0)
                        System.out.println(memArray[i].read());
                    else
                        System.out.print(memArray[i].read());
            }

            public void writeTo(int location, int value) {
                mem[location] = value;
            }

            public int readFrom(int location) {
                return mem[location];
            }

            public int size() {
                return mem.length;
            }

            public void loadMemory() {
                this.write(0, 799);
                this.write(1, 798);
                this.write(2, 198);
                this.write(3, 499);
                this.write(4, 1008);
                this.write(5, 1108);
                this.write(6, 899);
                this.write(7, 909);
                this.write(8, 898);
                this.write(9, 0000);
            }

            public void loadFromFile(String filename) {
                try {
                    FileReader fr = new FileReader(filename);
                    BufferedReader br = new BufferedReader(fr);
                    String read = null;
                    int towrite = 0;
                    int l = 0;
                    do {
                        try {
                            read = br.readLine();
                            towrite = Integer.parseInt(read);
                        } catch (Exception e) {
                        }
                        this.write(l, towrite);
                        l++;
                    } while (l < 100);
                } catch (Exception e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        }

    Here is my Test class:

        public class Test {
            public static void main(String[] args) throws java.io.IOException {
                Compiler compiler = new Compiler();
                compiler.compile("program.txt");
            }
        }
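    The stack trace points at the immediate bug: FirstPass() calls scan.nextLine() before checking that the file has any lines, so an empty (or not-found-in-the-working-directory, then recreated empty) program.txt throws NoSuchElementException: No line found at exactly the line the trace names. A minimal sketch of a guarded read loop that avoids both the unguarded first read and the catch-and-break dance; the names are the ones from the post:

        // Sketch: guard every read with hasNextLine() instead of reading first
        // and catching the exception afterwards.
        Scanner scan = new Scanner(file);
        while (scan.hasNextLine()) {
            String line = scan.nextLine();
            if (line.trim().isEmpty()) continue;   // skip blank lines safely
            String[] linearray = line.split(" ");
            // ... existing first-pass handling of linearray ...
        }
        scan.close();

    Two further things worth checking in the posted code: the labels array in isInstruction() contains "MTY" while getOpcode() tests for "MPY", so MPY lines are never treated as instructions; and SecondPass() opens new FileOutputStream(filename) instead of the newfile name it just built, which would overwrite the source file.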


  • Recovery strategy for a single bad sector in moricons.dll

    - by Damon
    This week, my harddisk made me an early Christmas present in the form of a single defect sector. To make up for the puny size of the present, it chose a sector inside moricons.dll for that. This means that the system now takes about 5 minutes to boot before Windows gives up and moves on, and there are two dozen scary "critical failure" entries in the system log after every boot, which is annoying. OK, admittedly, I shouldn't complain, it could be worse, the bad sector could be in ntldr...

    SMART info more or less indicates (for what SMART can indicate anyway) that the drive is mostly OK. Soft Read Error Rate has a score of 96, and Current Pending Sector Count has a raw value of 8, which translates to a score of 100. Acronis DriveMonitor makes this an issue (lowering the overall rating to 75%), while HDD Health calls it "excellent", giving an overall rating of 95% (which is what this harddisk has scored from day one). No single score is below 95 (power-on hours and spin-up count), and most are 100 anyway. Well, whatever, I've seen drives with perfect SMART values fail from one second to the next, and drives with moderate values work for years. So I'm inclined not to put too much weight on that overall.

    TL;DR Now... to the problem: I don't feel like trashing the disk just yet (that's planned with a new OS install upgrading to Win7 early next year, independently of this issue), but in the meantime I would still like to have a smoothly running system again. Therefore, I feel tempted to tamper with it, but before I render my system entirely unusable (since I've never done this before), I'd like to verify that my planned procedure is likely to succeed in giving me a working system again:

    1. Copy moricons.dl_ from the Windows install disk, rename it to moricons.zip, and unzip it. This gives an intact 5.1.2600.2180 version (the broken one is 5.1.2600.5512 - but I guess this makes not much of a difference, since it's an icon-only DLL, and an outdated copy should work better than one that can't be read).
    2. Run chkdsk /r /f, which will "repair" the file (i.e. delete the file without asking, tell the drive to remap the sector, and toss some unreadable junk into a file with a hexadecimal number).
    3. Hopefully Windows still boots after this (is that a reasonable expectation, or do I need to have something like BartPE ready? -- but then again, what's that good for in case chkdsk has nuked the entire file system...).
    4. Delete the junk file generated by chkdsk, copy the new DLL to %windir%\system32.
    5. Reboot. Pray.

    Maybe I just shouldn't touch anything, since it still kind of works... annoying as it is, it works. Unsure... But is there anything fundamentally wrong with the planned approach? Is this a sensible approach at all?
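    A note on step 1: the .dl_ files on the install media are Microsoft cabinet-compressed, and the stock tool for unpacking them is expand.exe rather than the rename-to-.zip trick. A sketch, assuming XP-era media mounted as D: with the usual i386 folder (both paths are illustrative):

        rem Unpack the compressed DLL from the install media to a scratch location
        expand D:\i386\moricons.dl_ C:\temp\moricons.dll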


  • Recovering data from a failed RAID configuration with 4 drives and two RAID sets (Asus P6T / Intel ICH10R)

    - by user56365
    I've added the complete detailed version of my question below for those who can help, but I want to quickly summarize it first. I set up two RAID arrays using (4) WD Raptors: a striped set for the OS and a 1+0 set for crucial data. A cable fell out, and after that boot one of the drives was no longer recognized in the array. After trying to fix it, another drive did the same. I now have two drives remaining, luckily with the parity information. I know the striped set is gone, but I need the data on the other set. Can anyone recommend anything to recover the data, or to fix whatever keeps the RAID controller from recognizing the two drives, even though the utility screen lists them as still a part of the configuration but "not found"?

    More details: I recently upgraded to an ASUS P6T motherboard with an Intel ICH10R RAID controller and changed my previous 4-drive RAID array from strictly a RAID 1+0 set to a RAID 0 for the OS/page/scratch drive and a RAID 1+0 set for crucial data. I never had problems with this configuration after upgrading, even when a drive died and was replaced; I managed to rebuild the array fine. Unfortunately, this time around a cable came unattached, and I booted the system up to the RAID status screen with the degraded error. This shouldn't have been a problem, but after I reattached the drive it was no longer recognized as a member of the array. Both drives actually show up as a non-member disk. I've spent a very, very long time online trying to find information or support and haven't had much luck. After spending time trying to scan the drive for errors, damaged partition info, etc., another drive in the set decided it didn't want to be recognized as a part of the array. At this point I have two out of the four drives still functioning, but the RAID 1+0 array went from degraded to failed, and I must find a way to retrieve that data. I think the two drives still in the array have the parity information, because they show up as OS (110GB), BACKUP (80GB) and OS:1 (110GB), BACKUP (80GB) under Windows disk management. The other two are simply 74 GB raw unallocated.

    Is it possible to recover the data using those two drives only, and which tool would I use? Could it be a simple partition-table or other error that is repairable with the hard drive utilities out there? I know the RAID 0 set is done for, but I would assume that because the right drives failed in a 1+0 configuration to preserve the data, I can retrieve it somehow.
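    One point in the asker's favor: RAID 1+0 survives one failure per mirrored pair, so the data is recoverable exactly when the two surviving drives come from different pairs. A hedged, read-only way to check what RAID metadata the survivors still carry is mdadm from a Linux live CD, which understands the Intel Matrix (IMSM) format that ICH10R writes; the device names below are assumptions:

        # Inspect Intel Matrix Storage (IMSM) metadata on each drive, read-only
        sudo mdadm --examine /dev/sda /dev/sdb /dev/sdc /dev/sdd
        # If containers are recognized, try assembling them without writing anything
        sudo mdadm --assemble --scan --readonly

    If mdadm can assemble the 1+0 volume read-only, the data can be copied off before touching the controller's own configuration.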


  • Can someone help me fix my code?

    - by user267490
    Hi, I have this code I've been working on, but I'm having a hard time getting it to work. I did one, but it only works in PHP 5.3, and I realized my host only supports PHP 5.0! So I was trying to see if I could get it to work on my server correctly; I'm just lost and tired, lol.

        <?php
        // Temporarily turn on error reporting
        @ini_set('display_errors', 1);
        error_reporting(E_ALL);

        // Set default timezone (New PHP versions complain without this!)
        date_default_timezone_set("GMT");

        // Common
        set_time_limit(0);
        require_once('dbc.php');
        require_once('sessions.php');
        page_protect();

        // Image settings
        define('IMG_FIELD_NAME', 'cons_image');
        // Max upload size in bytes (for form)
        define('MAX_SIZE_IN_BYTES', '512000');
        // Width and height for the thumbnail
        define('THUMB_WIDTH', '150');
        define('THUMB_HEIGHT', '150');
        ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
        <head>
        <title>whatever</title>
        <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
        <style type="text\css">
        .validationerrorText { color:red; font-size:85%; font-weight:bold; }
        </style>
        </head>
        <body>
        <h1>Change image</h1>
        <?php
        $errors = array();

        // Process form
        if (isset($_POST['submit'])) {
            // Get filename
            $filename = stripslashes($_FILES['cons_image']['name']);

            // Validation of image file upload
            $allowedFileTypes = array('image/gif', 'image/jpg', 'image/jpeg', 'image/png');
            if ($_FILES[IMG_FIELD_NAME]['error'] == UPLOAD_ERR_NO_FILE) {
                $errors['img_empty'] = true;
            } elseif (($_FILES[IMG_FIELD_NAME]['type'] != '') && (!in_array($_FILES[IMG_FIELD_NAME]['type'], $allowedFileTypes))) {
                $errors['img_type'] = true;
            } elseif (($_FILES[IMG_FIELD_NAME]['error'] == UPLOAD_ERR_INI_SIZE) || ($_FILES[IMG_FIELD_NAME]['error'] == UPLOAD_ERR_FORM_SIZE) || ($_FILES[IMG_FIELD_NAME]['size'] > MAX_SIZE_IN_BYTES)) {
                $errors['img_size'] = true;
            } elseif ($_FILES[IMG_FIELD_NAME]['error'] != UPLOAD_ERR_OK) {
                $errors['img_error'] = true;
            } elseif (strlen($_FILES[IMG_FIELD_NAME]['name']) > 200) {
                $errors['img_nametoolong'] = true;
            } elseif ((file_exists("\\uploads\\{$username}\\images\\banner\\{$filename}")) || (file_exists("\\uploads\\{$username}\\images\\banner\\thumbs\\{$filename}"))) {
                $errors['img_fileexists'] = true;
            }
            if (! empty($errors)) {
                unlink($_FILES[IMG_FIELD_NAME]['tmp_name']); // cleanup: delete temp file
            }

            // Create thumbnail
            if (empty($errors)) {
                // Make directory if it doesn't exist
                if (!is_dir("\\uploads\\{$username}\\images\\banner\\thumbs\\")) {
                    // Take directory and break it down into folders
                    $dir = "uploads\\{$username}\\images\\banner\\thumbs";
                    $folders = explode("\\", $dir);
                    // Create directory, adding folders as necessary as we go
                    // (ignore mkdir() errors, we'll check existence of full dir in a sec)
                    $dirTmp = '';
                    foreach ($folders as $fldr) {
                        if ($dirTmp != '') { $dirTmp .= "\\"; }
                        $dirTmp .= $fldr;
                        mkdir("\\".$dirTmp); // ignoring errors deliberately!
                    }
                    // Check again whether it exists
                    if (!is_dir("\\uploads\\$username\\images\\banner\\thumbs\\")) {
                        $errors['move_source'] = true;
                        unlink($_FILES[IMG_FIELD_NAME]['tmp_name']); // cleanup: delete temp file
                    }
                }
                if (empty($errors)) {
                    // Move uploaded file to final destination
                    if (! move_uploaded_file($_FILES[IMG_FIELD_NAME]['tmp_name'], "/uploads/$username/images/banner/$filename")) {
                        $errors['move_source'] = true;
                        unlink($_FILES[IMG_FIELD_NAME]['tmp_name']); // cleanup: delete temp file
                    } else {
                        // Create thumbnail in new dir
                        if (! make_thumb("/uploads/$username/images/banner/$filename", "/uploads/$username/images/banner/thumbs/$filename")) {
                            $errors['thumb'] = true;
                            unlink("/uploads/$username/images/banner/$filename"); // cleanup: delete source file
                        }
                    }
                }
            }

            // Record in database
            if (empty($errors)) {
                // Find existing record and delete existing images
                $sql = "SELECT `bannerORIGINAL`, `bannerTHUMB` FROM `agent_settings` WHERE (`agent_id`={$user_id}) LIMIT 1";
                $result = mysql_query($sql);
                if (!$result) {
                    unlink("/uploads/$username/images/banner/$filename"); // cleanup: delete source file
                    unlink("/uploads/$username/images/banner/thumbs/$filename"); // cleanup: delete thumbnail file
                    die("<div><b>Error: Problem occurred with Database Query!</b><br /><br /><b>File:</b> " . __FILE__ . "<br /><b>Line:</b> " . __LINE__ . "<br /><b>MySQL Error Num:</b> " . mysql_errno() . "<br /><b>MySQL Error:</b> " . mysql_error() . "</div>");
                }
                $numResults = mysql_num_rows($result);
                if ($numResults == 1) {
                    $row = mysql_fetch_assoc($result);
                    // Delete old files
                    unlink("/uploads/$username/images/banner/" . $row['bannerORIGINAL']); // delete OLD source file
                    unlink("/uploads/$username/images/banner/thumbs/" . $row['bannerTHUMB']); // delete OLD thumbnail file
                }
                // Update/create record with new images
                if ($numResults == 1) {
                    $sql = "INSERT INTO `agent_settings` (`agent_id`, `bannerORIGINAL`, `bannerTHUMB`) VALUES ({$user_id}, '/uploads/$username/images/banner/$filename', '/uploads/$username/images/banner/thumbs/$filename')";
                } else {
                    $sql = "UPDATE `agent_settings` SET `bannerORIGINAL`='/uploads/$username/images/banner/$filename', `bannerTHUMB`='/uploads/$username/images/banner/thumbs/$filename' WHERE (`agent_id`={$user_id})";
                }
                $result = mysql_query($sql);
                if (!$result) {
                    unlink("/uploads/$username/images/banner/$filename"); // cleanup: delete source file
                    unlink("/uploads/$username/images/banner/thumbs/$filename"); // cleanup: delete thumbnail file
                    die("<div><b>Error: Problem occurred with Database Query!</b><br /><br /><b>File:</b> " . __FILE__ . "<br /><b>Line:</b> " . __LINE__ . "<br /><b>MySQL Error Num:</b> " . mysql_errno() . "<br /><b>MySQL Error:</b> " . mysql_error() . "</div>");
                }
            }

            // Print success message and show the thumbnail image created
            if (empty($errors)) {
                echo "<p>Thumbnail created Successfully!</p>\n";
                echo "<img src=\"/uploads/$username/images/banner/thumbs/$filename\" alt=\"New image thumbnail\" />\n";
                echo "<br />\n";
            }
        }
        if (isset($errors['move_source'])) {
            echo "\t\t<div>Error: Failure occurred moving uploaded source image!</div>\n";
        }
        if (isset($errors['thumb'])) {
            echo "\t\t<div>Error: Failure occurred creating thumbnail!</div>\n";
        }
        ?>
        <form action="" enctype="multipart/form-data" method="post">
        <input type="hidden" name="MAX_FILE_SIZE" value="<?php echo MAX_SIZE_IN_BYTES; ?>" />
        <label for="<?php echo IMG_FIELD_NAME; ?>">Image:</label>
        <input type="file" name="<?php echo IMG_FIELD_NAME; ?>" id="<?php echo IMG_FIELD_NAME; ?>" />
        <?php
        if (isset($errors['img_empty'])) {
            echo "\t\t<div class=\"validationerrorText\">Required!</div>\n";
        }
        if (isset($errors['img_type'])) {
            echo "\t\t<div class=\"validationerrorText\">File type not allowed! GIF/JPEG/PNG only!</div>\n";
        }
        if (isset($errors['img_size'])) {
            echo "\t\t<div class=\"validationerrorText\">File size too large! Maximum size should be " . MAX_SIZE_IN_BYTES . "bytes!</div>\n";
        }
        if (isset($errors['img_error'])) {
            echo "\t\t<div class=\"validationerrorText\">File upload error occured! Error code: {$_FILES[IMG_FIELD_NAME]['error']}</div>\n";
        }
        if (isset($errors['img_nametoolong'])) {
            echo "\t\t<div class=\"validationerrorText\">Filename too long! 200 Chars max!</div>\n";
        }
        if (isset($errors['img_fileexists'])) {
            echo "\t\t<div class=\"validationerrorText\">An image file already exists with that name!</div>\n";
        }
        ?>
        <br /><input type="submit" name="submit" id="image1" value="Upload image" />
        </form>
        </body>
        </html>
        <?php
        #################################
        #
        # F U N C T I O N S
        #
        #################################

        /*
         * Function: make_thumb
         *
         * Creates the thumbnail image from the uploaded image
         * the resize will be done considering the width and
         * height defined, but without deforming the image
         *
         * @param $sourceFile Path and filename of source image
         * @param $destFile   Path and filename to save thumbnail as
         * @param $new_w      the new width to use
         * @param $new_h      the new height to use
         */
        function make_thumb($sourceFile, $destFile, $new_w=false, $new_h=false) {
            if ($new_w === false) { $new_w = THUMB_WIDTH; }
            if ($new_h === false) { $new_h = THUMB_HEIGHT; }

            // Get image extension
            $ext = strtolower(getExtension($sourceFile));

            // Copy source
            switch ($ext) {
                case 'jpg':
                case 'jpeg':
                    $src_img = imagecreatefromjpeg($sourceFile);
                    break;
                case 'png':
                    $src_img = imagecreatefrompng($sourceFile);
                    break;
                case 'gif':
                    $src_img = imagecreatefromgif($sourceFile);
                    break;
                default:
                    return false;
            }
            if (!$src_img) { return false; }

            // Get dimmensions of the source image
            $old_x = imageSX($src_img);
            $old_y = imageSY($src_img);

            // Calculate the new dimmensions for the thumbnail image
            // 1. calculate the ratio by dividing the old dimmensions with the new ones
            // 2. if the ratio for the width is higher, the width will remain the one define in WIDTH variable
            //    and the height will be calculated so the image ratio will not change
            // 3. otherwise we will use the height ratio for the image
            //    as a result, only one of the dimmensions will be from the fixed ones
            $ratio1 = $old_x / $new_w;
            $ratio2 = $old_y / $new_h;
            if ($ratio1 > $ratio2) {
                $thumb_w = $new_w;
                $thumb_h = $old_y / $ratio1;
            } else {
                $thumb_h = $new_h;
                $thumb_w = $old_x / $ratio2;
            }

            // Create a new image with the new dimmensions
            $dst_img = ImageCreateTrueColor($thumb_w, $thumb_h);

            // Resize the big image to the new created one
            imagecopyresampled($dst_img, $src_img, 0, 0, 0, 0, $thumb_w, $thumb_h, $old_x, $old_y);

            // Output the created image to the file. Now we will have the thumbnail into the file named by $filename
            switch ($ext) {
                case 'jpg':
                case 'jpeg':
                    $result = imagepng($dst_img, $destFile);
                    break;
                case 'png':
                    $result = imagegif($dst_img, $destFile);
                    break;
                case 'gif':
                    $result = imagejpeg($dst_img, $destFile);
                    break;
                default:
                    // should never occur!
            }
            if (!$result) { return false; }

            // Destroy source and destination images
            imagedestroy($dst_img);
            imagedestroy($src_img);
            return true;
        }

        /*
         * Function: getExtension
         *
         * Returns the file extension from a given filename/path
         *
         * @param $str the filename to get the extension from
         */
        function getExtension($str) {
            return pathinfo($str, PATHINFO_EXTENSION);
        }
        ?>
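    Regardless of PHP version, a few bugs in the posted code are visible on inspection. The database branch is inverted: when a row already exists ($numResults == 1) the code runs an INSERT, creating a duplicate, and when no row exists it runs an UPDATE that matches nothing. A hedged corrected version of just that branch, keeping the table and column names from the post:

        // If a record exists, UPDATE it; otherwise INSERT a new one
        // (the original had these two branches swapped).
        if ($numResults == 1) {
            $sql = "UPDATE `agent_settings`
                    SET `bannerORIGINAL`='/uploads/$username/images/banner/$filename',
                        `bannerTHUMB`='/uploads/$username/images/banner/thumbs/$filename'
                    WHERE (`agent_id`={$user_id})";
        } else {
            $sql = "INSERT INTO `agent_settings` (`agent_id`, `bannerORIGINAL`, `bannerTHUMB`)
                    VALUES ({$user_id}, '/uploads/$username/images/banner/$filename',
                            '/uploads/$username/images/banner/thumbs/$filename')";
        }

    Two smaller issues: the style tag's MIME type should be "text/css", not "text\css"; and the output switch in make_thumb() writes JPEG sources with imagepng(), PNG with imagegif(), and GIF with imagejpeg(), so pairing each extension with its matching image*() writer avoids misnamed thumbnails.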


  • How to run a restricted set of programs with Administrator privileges without giving up Admin access (Win7 Pro)

    - by frLich
    I have a shared system running Windows 7 x64, restricted to a 'standard user' with no password. Not everyone who has access to the system has the administrator password. This works rather well, except for some applications - especially the unlock applications for encrypted hard drives/USB flash drives. The specific ones either require Administrator access (e.g. Seagate BlackArmor) or simply fail without it -- since these programs are sending raw commands to a device, this is to be expected. I would like to be able to add the hashes of these particular programs to a whitelist and have them run as administrator without needing any prompts. Since these are by definition on removable media, I can't simply use a filename or even a path. One of the users who shares the system can be considered 'crafty', so anything which temporarily grants administrator rights to a user account is certain to cause problems. What I'd like to be able to do:

    1. Create an admin account that can only run programs from a whitelist (or, failing that, from a directory). I can't find a good way to do this: as far as I can tell, SRP applies equally to ALL users. Even if I put a "Deny" token on all directories on the system, such that new directories would inherit it, it could still potentially run things from the mounted USB devices. I also don't know whether it's possible to create a new directory that DOESN'T inherit from the parent, which would lack the deny token and provide admin access.

    2. Find a lightweight service that will run these programs in its local context. Windows 7 seems to block cross-privilege-level communication by default, and I haven't found such a service for Windows 7. One example seems to be "sudo" (http://pages.cpsc.ucalgary.ca/~nfriess/sudo/), but because it uses a WLNOTIFY hook, it won't work under Vista or Windows 7.

    Non-solutions:

    - RunAs: requires the administrator password! (but everyone calls it "sudo" anyway)
    - RunAs /savecred: nice idea, but appears to be completely insecure.
    - RUNASSPC: same concept as RunAs, uses "encrypted" files with credentials, but checks in user-space.
    - Scheduled Tasks: "fixed" permissions make this difficult, and it doesn't support interactive processes even if it did.
    - SuRun: from Google: "Surun uses its own Windows service that adds the user to the group of administrators during program start and removes him automatically from that group again"


  • vmdk to live cd - VMware vmxnet virtual NIC driver Kernel panic

    - by ronalchn
    Task: I am trying to convert a virtual machine to a live CD. Specifically, the virtual machine I am trying to convert is the IOI 2013 Competition Environment. In this task, I am aided by a guide, "Converting a virtual disk image: VDI or VMDK to an ISO you can distribute".

    Symptoms: After getting through all the instructions, the live CD causes a kernel panic on boot on bare metal. In particular, the screen shows:

        [0.737348] cdrom: Uniform CD-ROM driver Revision: 3.20
        [0.737503] sr 3:0:0:0: >Attached scsi CD-ROM sr0
        [0.737638] sr 3:0:0:0: >Attached scsi generic sg2 type 5
        [0.737771] Freeing unused kernel memory: 756k freed
        [0.738093] Write protecting the kernel text: 5960k
        [0.738155] Write protecting the kernel read-only data: 2424k
        [0.738224] NX-protecting the kernel data: 4280k
        Loading, please wait...
        [0.752252] udevd[100]: starting version 175
        [0.768708] VMware vmxnet3 virtual NIC driver - version 1.1.29.0-k-NAPI
        [0.781204] VMware PVSCSI driver - version 1.0.2.0-k
        [0.789555] VMware vmxnet virtual NIC driver
        [0.799356] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000200
        [0.799356]
        [0.799472] Pid: 1, comm: init Tainted: G 0 3.5.0-17-generic #28-Ubuntu
        [0.799549] Call Trace:
        [0.799603] [<c15bf0ec>] panic+0x81/0x17b
        [0.799654] [<c104a6a5>] do_exit+0x745/0x7a0
        [0.799707] [<c104a9a4>] do_group_exit+0x34/0xa0
        [0.799760] [<c104aa28>] sys_exit_group+0x18/0x20
        [0.799813] [<c15cff5f>] sysenter_do_call+0x12/0x28

    Possible problem: I suspect that the problem is the VMware vmxnet virtual NIC driver - however, I do not know how I can uninstall it and possibly install one for a bare-metal machine. If anyone knows which packages need installing/uninstalling at the .rootfs/ chroot directory stage, please let me know.

    Details on procedure: Note that after importing the .ova file into VirtualBox, the virtual machine is stored as a .vmdk file already, and not a .vdi file. I would like to point out some results of the procedure followed in case of any questions. This is after extracting the filesystem from the .raw file to the .rootfs/ directory mentioned in the blog. I changed the filesystem table as mentioned in the blog, then looked for the possible "kernel optimized for virtualization". However, I found that linux-image-generic was already installed. Also, when running the command dpkg-query --showformat='${Package}\n' -W 'vmware-tools*' (or dpkg-query --showformat='${Package}\n' -W '*-virtual'), no packages were found. Thus, I did not find any virtualization-specific packages. I proceeded to generate the ISO following the steps in the blog, and burned it to a DVD.
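    If the VMware drivers are the suspects, a hedged way to keep them out of the live CD is to blacklist the modules inside the .rootfs/ chroot and regenerate the initramfs before building the ISO; the module names are taken from the boot log above, and the config file name is illustrative:

        # inside the .rootfs/ chroot
        cat >> /etc/modprobe.d/blacklist-vmware.conf <<'EOF'
        blacklist vmxnet
        blacklist vmxnet3
        blacklist vmw_pvscsi
        EOF
        update-initramfs -u

    That said, the panic line itself says init exited with status 0x200, so the driver messages may be incidental; it could be equally worth verifying that /sbin/init and the libraries it needs survived the filesystem extraction intact.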



  • Network speeds being reported as 4x higher than actual in Windows 7 SP1

    - by Synetech
    Ever since installing Windows 7 SP1, I have noticed that all programs that display my network transfer rate report it exactly 4x higher than it actually is. For example, when I download something from a high-bandwidth web site or through torrents with lots of sources, the download rate indicated is ~5MBps (~40Mbps) even though my Internet connection has a maximum of only 1.5MBps (12Mbps). It is the same situation with the upstream bandwidth: the connection maximum is 64KBps, but I'm seeing up to 256KBps. I have tried several different programs for monitoring bandwidth throughput and they all give the same results. I also tried different times and different days, and they always show the rate as being four times too high.

    My initial thought was that my ISP had increased the speeds (without my noticing), which they have done before. However, I checked my ISP's site and they have not increased the speeds. Moreover, when I look at the speeds in the program actually doing the transfer (e.g. Chrome, µTorrent, etc.), the numbers are in line with the expected values at the same time that bandwidth-monitoring programs are showing the high numbers. The only significant change (and pretty much the only change at all) to my system before the discrepancy appeared was the installation of SP1 for Windows 7. As such, it is my belief that some change exists in SP1 whereby software that accesses the bandwidth via a specific API receives (erroneously?) high numbers, while software that has access to the raw data continues to receive the correct values.

    I booted into Windows XP and downloaded some things via HTTP and torrent, and in both cases the numbers were as expected (like they were in Windows 7 before installing SP1). I then booted back into 7 SP1 and once again the numbers were four times higher than possible. Therefore it is definitely something in SP1 that has changed how local bandwidth is calculated/returned. There is definitely something wonky with Windows 7 SP1's network speed calculation. I tried Googling this but (for multiple reasons) have had a difficult time finding anything relevant. Has anybody else noticed this behavior? Does anybody know of any bugs or changes in SP1 that could account for it?
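    Most third-party bandwidth monitors read the Windows performance counters, so one hedged way to narrow this down is to watch the counter directly while a download runs and compare it with the rate shown inside the downloading program; the counter path below is the stock English name:

        typeperf "\Network Interface(*)\Bytes Received/sec" -si 1 -sc 30

    If typeperf shows the same 4x-inflated figure, the problem sits in the counter plumbing itself rather than in any individual monitoring tool.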


  • TrueCrypt partition will no longer mount

    - by sparkyuiop
    I am hoping for some advice to help me out of my situation, with luck. I have a computer running Windows 7 Ultimate x64 with 3 hard disks installed. On my 2TB hard disk 2 (non-system disk) I have 4 partitions: one for music, another for video, a downloads partition, and a 500GB RAW TrueCrypt-encrypted partition/volume that I had set up to mount with 4 photographs used as keyfiles. The 4 photographs are located in my 'Documents' partition, which is one of four partitions on my 1.5TB hard disk 1 (non-system disk).

    When I set up the disk encryption I did not (I'm 99% sure) create a password; I only used the 4 photograph keyfiles to mount the volume. Recently my 1TB hard disk 0 (system/boot) started to fail, so I decided to replace it. I was going to clone the old disk to a new one but decided that a fresh installation would be more beneficial. Once I had transferred all the required user data from my old hard disk 0 (C: disk) I discarded it. I reinstalled TrueCrypt, pointed it to the partition, selected my 4 keyfile photographs, and mounted my encrypted volume with no issues. In fact I mounted it several times after reinstalling Windows and after reboots. Now all of a sudden, when I try to mount it I get the message "Incorrect keyfile(s) and/or password or not a TrueCrypt volume".

    I am not sure why this happened, as I do not recall exactly what I did between last mounting the volume successfully and it not mounting. Here are some of the possible things I may have done to cause it, though I am at a loss as to where to start to try and resolve the problem:

    1. I swapped the drive letters into a preferred order.
    2. I possibly swapped the physical SATA connectors on the mainboard.
    3. I enabled 'Hot Plugging' for the two non-system hard disk SATA ports and the DVD SATA port in the BIOS.

    I have tried changing the encrypted partition's drive letter as suggested in another post, but this does not help. On my old system the encrypted drive was drive "X". I have tried about all the other free drive letters, but alas nothing changes. I do not recall what drive letter was allocated to the encrypted partition before I changed them all. I have not tried to change the letter back to what it possibly was to start with, as I am happy with the current layout; I will try this if anyone thinks it would be worthwhile though.

    I do hope I have managed to convey my situation in an understandable manner, and I live in hope someone can help me recover years of personal files. Thank you very much for taking the time to read my post and for any suggestions you may offer. Regards, Phillip Thorne (UK). Anyone???
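    One detail worth checking before anything else: TrueCrypt derives the keyfile contribution from at most the first 1,048,576 bytes of each keyfile, so anything that rewrote those four photos (an image editor or gallery tool re-saving EXIF metadata, for example) breaks mounting even though the pictures look unchanged. Comparing the files' sizes, timestamps, or hashes against any backup copy would confirm or rule that out. For repeated experiments, mounting from the command line with explicit keyfiles is faster than the dialog; the paths, partition number, and drive letter here are illustrative:

        "C:\Program Files\TrueCrypt\TrueCrypt.exe" /v \Device\Harddisk1\Partition4 /l X ^
            /k "D:\photo1.jpg" /k "D:\photo2.jpg" /k "D:\photo3.jpg" /k "D:\photo4.jpg"

    Note that drive-letter shuffling and SATA port changes do not alter the volume's contents, so changed keyfiles (or a damaged volume header) are the more likely causes of this exact error message.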


  • Has Javascript developed beyond what it was originally designed to do?

    - by Elliot Bonneville
    I've been talking with a friend about the purpose of JavaScript, and when and how it should be used. He quoted this:

    - JavaScript was designed to add interactivity to HTML pages [...]
    - JavaScript gives HTML designers a programming tool: HTML authors are normally not programmers, but JavaScript is a scripting language with a very simple syntax! Almost anyone can put small "snippets" of code into their HTML pages.
    - JavaScript can react to events: a JavaScript can be set to execute when something happens, like when a page has finished loading or when a user clicks on an HTML element.
    - JavaScript can read and write HTML elements: a JavaScript can read and change the content of an HTML element.
    - JavaScript can be used to validate data: a JavaScript can be used to validate form data before it is submitted to a server. This saves the server from extra processing.
    - JavaScript can be used to detect the visitor's browser: a JavaScript can be used to detect the visitor's browser, and - depending on the browser - load another page specifically designed for that browser.
    - JavaScript can be used to create cookies: a JavaScript can be used to store and retrieve information on the visitor's computer.

    However, it seems like JavaScript is getting used for a lot more than that these days. My friend also advocates against using JavaScript's OOP functionality, claiming that "you shouldn't be processing data, merely validating." Is JavaScript really limited to validating data and making flashy graphics on a web page? He goes on to claim "you shouldn't be attempting to access databases through javascript" and also says "in general you don't want to be doing your heavy lifting in javascript". I can't say I agree with his opinion, but I'd like to get some more input on this. So, my question: has JavaScript evolved from the definition above into something more powerful, has the way we use it changed, or am I just plain wrong? While I realize this is a subjective question, I can't find any more information on it, so a few links would be good, if nothing else. I'm not looking for a debate, just an answer.


  • Best practice for captcha-based protection against DoS on an Nginx proxy

    - by user325320
    The idea is explained here. In simple words, the Nginx proxy plays the role of a load balancer and passes the HTTP/HTTPS requests on to the servers. If the number of requests from an individual IP within a certain period exceeds a threshold, a captcha is triggered for the upcoming requests, and the end user must input the correct captcha code before he can continue to access the site. Do you know any open-source / free Nginx module for this? I searched on the Internet, and here is one of them: https://github.com/snbuback/nginx - it seems it needs modification. Any suggestion / experience is welcome, thank you.
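    Not a captcha, but worth noting as the baseline: the stock ngx_http_limit_req_module already implements the per-IP-threshold half of this design, and a captcha module would typically sit behind the same kind of counter. A minimal sketch; the zone name, rate, burst, and backend name are illustrative:

        # In the http{} block: track clients by IP, allow 10 requests/second each
        limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

        server {
            location / {
                # Queue short bursts, reject the rest with 503 until the rate drops
                limit_req zone=perip burst=20 nodelay;
                proxy_pass http://backend;
            }
        }

    An error_page handler on the rejected requests is one plausible place to wire in a redirect to a captcha page.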


  • Cannot add ILMerge to the console PATH

    - by KMC
    I installed ILMerge.exe; the path is C:\Program Files\Microsoft\ILMerge\ILMerge.exe. I want to add the command to the PATH so I can type ilmerge anywhere to use the application. I googled and tried all of the following, but none of them work:

        setx -m ilmerge;C:\Program Files\Microsoft\ILMerge\ILMerge.exe
        setx -M ilmerge;C:\Program Files\Microsoft\ILMerge\ILMerge.exe
        setx -m %PATH%;C:\Program Files\Microsoft\ILMerge\ILMerge.exe
        setx /S system /U administrator ilmerge;C:\Program Files\Microsoft\ILMerge\ILMerge.exe

    I then tried the GUI's Environment Variables dialog, clicked New, and entered the variable "ilmerge" with the value "C:\Program Files\Microsoft\ILMerge\ILMerge.exe". But typing ilmerge in a command prompt still gives me:

        'ilmerge' is not recognized as an internal or external command, operable program or batch file.

    Why is something as basic as setting a path this confusing?
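    The underlying snag is that PATH holds a semicolon-separated list of directories, not name-to-executable mappings, and setx takes the variable name and its value as two separate arguments. A sketch of the usual fix, using the install directory from the question:

        rem Append the ILMerge folder (not the .exe) to the machine PATH; run elevated.
        setx /M PATH "%PATH%;C:\Program Files\Microsoft\ILMerge"

    Two caveats: expanding %PATH% this way folds the user PATH into the machine PATH, and setx only affects future processes, so a new console window is needed before ilmerge resolves.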


  • iptables is not allowing me to contact my DNS nameservers

    - by user1272737
    I have the following iptables rules:

        Chain INPUT (policy DROP)
        target     prot opt source                 destination
        ACCEPT     tcp  --  anywhere               anywhere      tcp dpt:ssh
        ACCEPT     tcp  --  anywhere               anywhere      tcp dpt:http
        ACCEPT     tcp  --  anywhere               anywhere      tcp dpt:https
        ACCEPT     tcp  --  localhost.localdomain  anywhere      tcp dpt:mysql
        ACCEPT     tcp  --  anywhere               anywhere      tcp dpt:14443
        ACCEPT     tcp  --  anywhere               anywhere      tcp dpt:ftp
        ACCEPT     tcp  --  anywhere               anywhere      tcp dpt:ftp-data
        ACCEPT     tcp  --  anywhere               anywhere      tcp dpt:xxxxxxx

        Chain FORWARD (policy ACCEPT)
        target     prot opt source                 destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source                 destination

    When I turn off iptables, I am able to use wget and all other commands. When these rules are enabled, I cannot connect to any address. Any idea why this would be?
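    With INPUT's policy set to DROP and only inbound service ports accepted, the replies from the DNS servers (and the return traffic of every other outbound connection, which is why wget fails too) never make it back in. The common minimal fix is a stateful accept rule; a sketch:

        # Accept return traffic for connections this host initiated (covers DNS replies)
        iptables -I INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    On systems where the conntrack match is preferred, the equivalent is -m conntrack --ctstate ESTABLISHED,RELATED. Without connection tracking, a narrower alternative is to accept UDP packets with source port 53 from the configured nameservers only.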


  • Netflix, jQuery, JSONP, and OData

    - by Stephen Walther
    At the last MIX conference, Netflix announced that they are exposing their catalog of movie information using the OData protocol. This is great news! It means that you can take advantage of all of the advanced OData querying features against a live database of Netflix movies. In this blog entry, I'll demonstrate how you can use Netflix, jQuery, JSONP, and OData to create a simple movie lookup form. The form enables you to enter a movie title, or part of a movie title, and display a list of matching movies. For example, Figure 1 illustrates the movies displayed when you enter the value robot into the lookup form.

    Using the Netflix OData Catalog API

    You can learn about the Netflix OData Catalog API at the following website: http://developer.netflix.com/docs/oData_Catalog. The nice thing about this website is that it provides plenty of samples. It also has a good general reference for OData. For example, the website includes a list of OData filter operators and functions.

    The Netflix Catalog API exposes 4 top-level resources:

    - Titles - a database of movie information including interesting movie properties such as Synopsis, BoxArt, and Cast.
    - People - a database of people information including interesting information such as Awards, TitlesDirected, and TitlesActedIn.
    - Languages - enables you to get title information in different languages.
    - Genres - enables you to get title information for specific movie genres.

    OData is REST-based. This means that you can perform queries by putting together the right URL. For example, if you want to get a list of the movies that were released after 2010 and that had an average rating greater than 4, you can enter the following URL in the address bar of your browser:

        http://odata.netflix.com/Catalog/Titles?$filter=ReleaseYear gt 2010 and AverageRating gt 4

    Entering this URL returns the movies in Figure 2.

    Creating the Movie Lookup Form

    The complete code for the Movie Lookup form is contained in Listing 1.

    Listing 1 - MovieLookup.htm

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>Netflix with jQuery</title>
            <style type="text/css">
                #movieTemplateContainer div {
                    width: 400px;
                    padding: 10px;
                    margin: 10px;
                    border: black solid 1px;
                }
            </style>
            <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
            <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script>
        </head>
        <body>

        <label>Search Movies:</label>
        <input id="movieName" size="50" />
        <button id="btnLookup">Lookup</button>
        <div id="movieTemplateContainer"></div>

        <script id="movieTemplate" type="text/html">
            <div>
                <img src="<%=BoxArtSmallUrl %>" />
                <strong><%=Name%></strong>
                <p>
                    <%=Synopsis %>
                </p>
            </div>
        </script>

        <script type="text/javascript">
            $("#btnLookup").click(function () {
                // Build OData query
                var movieName = $("#movieName").val();
                var query = "http://odata.netflix.com/Catalog"  // netflix base url
                    + "/Titles"                                 // top-level resource
                    + "?$filter=substringof('" + escape(movieName) + "',Name)"  // filter by movie name
                    + "&$callback=callback"                     // jsonp request
                    + "&$format=json";                          // json request

                // Make JSONP call to Netflix
                $.ajax({
                    dataType: "jsonp",
                    url: query,
                    jsonpCallback: "callback",
                    success: callback
                });
            });

            function callback(result) {
                // unwrap result
                var movies = result["d"]["results"];

                // show movies in template
                var showMovie = tmpl("movieTemplate");
                var html = "";
                for (var i = 0; i < movies.length; i++) {
                    // flatten movie
                    movies[i].BoxArtSmallUrl = movies[i].BoxArt.SmallUrl;
                    // render with template
                    html += showMovie(movies[i]);
                }
                $("#movieTemplateContainer").html(html);
            }
        </script>

        </body>
        </html>

    The HTML page in Listing 1 includes two JavaScript libraries:

        <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
        <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script>

    The first script tag retrieves jQuery from the Microsoft Ajax CDN. You can learn more about the Microsoft Ajax CDN by visiting the following website: http://www.asp.net/ajaxLibrary/cdn.ashx

    The second script tag references Resig's micro-templating library. Because I want to use a template to display each movie, I need this library: http://ejohn.org/blog/javascript-micro-templating/

    When you enter a value into the Search Movies input field and click the button, the following JavaScript code is executed:

        // Build OData query
        var movieName = $("#movieName").val();
        var query = "http://odata.netflix.com/Catalog"  // netflix base url
            + "/Titles"                                 // top-level resource
            + "?$filter=substringof('" + escape(movieName) + "',Name)"  // filter by movie name
            + "&$callback=callback"                     // jsonp request
            + "&$format=json";                          // json request

        // Make JSONP call to Netflix
        $.ajax({
            dataType: "jsonp",
            url: query,
            jsonpCallback: "callback",
            success: callback
        });

    This code builds a query that will be executed against the Netflix Catalog API. For example, if you enter the search phrase King Kong, then the following URL is created:

        http://odata.netflix.com/Catalog/Titles?$filter=substringof('King%20Kong',Name)&$callback=callback&$format=json

    This query includes the following parameters:

    - $filter - you assign a filter expression to this parameter to filter the movie results.
    - $callback - you assign the name of a JavaScript callback method to this parameter. OData calls this method to return the movie results.
    - $format - you assign either the value json or xml to this parameter to specify the format of the movie results.

    Notice that all of the OData parameters -- $filter, $callback, $format -- start with a dollar sign $. The Movie Lookup form uses JSONP to retrieve data across the Internet. Because WCF Data Services supports JSONP, and Netflix uses WCF Data Services to expose movies using the OData protocol, you can use JSONP when interacting with the Netflix Catalog API. To learn more about using JSONP with OData, see Pablo Castro's blog: http://blogs.msdn.com/pablo/archive/2009/02/25/adding-support-for-jsonp-and-url-controlled-format-to-ado-net-data-services.aspx

    The actual JSONP call is performed by calling the $.ajax() method. When this call successfully completes, the JavaScript callback() method is called. The callback() method looks like this:

        function callback(result) {
            // unwrap result
            var movies = result["d"]["results"];

            // show movies in template
            var showMovie = tmpl("movieTemplate");
            var html = "";
            for (var i = 0; i < movies.length; i++) {
                // flatten movie
                movies[i].BoxArtSmallUrl = movies[i].BoxArt.SmallUrl;
                // render with template
                html += showMovie(movies[i]);
            }
            $("#movieTemplateContainer").html(html);
        }

    The movie results from Netflix are passed to the callback method. The callback method takes advantage of Resig's micro-templating library to display each of the movie results. A template used to display each movie is passed to the tmpl() method. The movie template looks like this:

        <script id="movieTemplate" type="text/html">
            <div>
                <img src="<%=BoxArtSmallUrl %>" />
                <strong><%=Name%></strong>
                <p>
                    <%=Synopsis %>
                </p>
            </div>
        </script>

    This template looks like a server-side ASP.NET template. However, the template is rendered in the client (browser) instead of on the server.

    Summary

    The goal of this blog entry was to demonstrate how well jQuery works with OData. We managed to use a number of interesting open-source libraries and open protocols while building the Movie Lookup form, including jQuery, JSONP, JSON, and OData.


  • PPTP server stuck at "GRE: Bad checksum from pppd"

    - by user92516
    I am a network engineer having quite limited experience with Ubuntu. I have been following up these online instructions to set up a pptp server but without much luck to get it to work. My server is a vm running an Apple Xserve behind a Cisco firewall. I made sure tcp 1723 and GRE are opened for the box. Below is the syslog output, looks like I always got stuck at GRE: Bad checksum from pppd. I'm running Ubuntu 10.04.

        Sep 24 13:21:53 ubuntu pptpd[1231]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Sep 24 13:21:53 ubuntu pptpd[1231]: CTRL: Reaping child PPP[1232]
        Sep 24 13:21:53 ubuntu pptpd[1231]: CTRL: Client 166.137.85.165 control connection finished
        Sep 24 13:22:41 ubuntu pptpd[1276]: MGR: connections limit (100) reached, extra IP addresses ignored
        Sep 24 13:22:41 ubuntu pptpd[1277]: MGR: Manager process started
        Sep 24 13:22:41 ubuntu pptpd[1277]: MGR: Maximum of 100 connections available
        Sep 24 13:22:50 ubuntu pptpd[1278]: CTRL: Client 166.137.85.165 control connection started
        Sep 24 13:22:51 ubuntu pptpd[1278]: CTRL: Starting call (launching pppd, opening GRE)
        Sep 24 13:22:51 ubuntu pppd[1279]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
        Sep 24 13:22:51 ubuntu pppd[1279]: pppd 2.4.5 started by root, uid 0
        Sep 24 13:22:51 ubuntu pppd[1279]: Using interface ppp0
        Sep 24 13:22:51 ubuntu pppd[1279]: Connect: ppp0 <--> /dev/pts/1
        Sep 24 13:22:51 ubuntu pptpd[1278]: GRE: Bad checksum from pppd.
        Sep 24 13:23:21 ubuntu pppd[1279]: LCP: timeout sending Config-Requests
        Sep 24 13:23:21 ubuntu pppd[1279]: Connection terminated.
        Sep 24 13:23:21 ubuntu pppd[1279]: Modem hangup
        Sep 24 13:23:21 ubuntu pppd[1279]: Exit.
        Sep 24 13:23:21 ubuntu pptpd[1278]: GRE: read(fd=6,buffer=805a540,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
        Sep 24 13:23:21 ubuntu pptpd[1278]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Sep 24 13:23:21 ubuntu pptpd[1278]: CTRL: Reaping child PPP[1279]
        Sep 24 13:23:21 ubuntu pptpd[1278]: CTRL: Client 166.137.85.165 control connection finished


  • Mapping UrlEncoded POST Values in ASP.NET Web API

    - by Rick Strahl
    If there's one thing that's a bit unexpected in ASP.NET Web API, it's the limited support for mapping url encoded POST data values to simple parameters of ApiController methods. When I first looked at this I thought I was doing something wrong, because it seems mighty odd that you can bind query string values to parameters by name, but can't bind POST values to parameters in the same way. To demonstrate here's a simple example. If you have a Web API method like this:[HttpGet] public HttpResponseMessage Authenticate(string username, string password) { …} and then hit with a URL like this: http://localhost:88/samples/authenticate?Username=ricks&Password=sekrit it works just fine. The query string values are mapped to the username and password parameters of our API method. But if you now change the method to work with [HttpPost] instead like this:[HttpPost] public HttpResponseMessage Authenticate(string username, string password) { …} and hit it with a POST HTTP Request like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Content-type: application/x-www-form-urlencoded Content-Length: 30 Username=ricks&Password=sekrit you'll find that while the request works, it doesn't actually receive the two string parameters. The username and password parameters are null and so the method is definitely going to fail. When I mentioned this over Twitter a few days ago I got a lot of responses back of why I'd want to do this in the first place - after all HTML Form submissions are the domain of MVC and not WebAPI which is a valid point. However, the more common use case is using POST Variables with AJAX calls. The following is quite common for passing simple values:$.post(url,{ Username: "Rick", Password: "sekrit" },function(result) {…}); but alas that doesn't work. How ASP.NET Web API handles Content Bodies Web API supports parsing content data in a variety of ways, but it does not deal with multiple posted content values. In effect you can only post a single content value to a Web API Action method. That one parameter can be very complex and you can bind it in a variety of ways, but ultimately you're tied to a single POST content value in your parameter definition. While it's possible to support multiple parameters on a POST/PUT operation, only one parameter can be mapped to the actual content - the rest have to be mapped to route values or the query string. Web API treats the whole request body as one big chunk of data that is sent to a Media Type Formatter that's responsible for de-serializing the content into whatever value the method requires. The restriction comes from async nature of Web API where the request data is read only once inside of the formatter that retrieves and deserializes it. Because it's read once, checking for content (like individual POST variables) first is not possible. However, Web API does provide a couple of ways to access the form POST data: Model Binding - object property mapping to bind POST values FormDataCollection - collection of POST keys/values ModelBinding POST Values - Binding POST data to Object Properties The recommended way to handle POST values in Web API is to use Model Binding, which maps individual urlencoded POST values to properties of a model object provided as the parameter. 
Model binding requires a single object as input to be bound to the POST data, with each POST key that matches a property name (including nested properties like Address.Street) being mapped and updated including automatic type conversion of simple types. This is a very nice feature - and a familiar one from MVC - that makes it very easy to have model objects mapped directly from inbound data. The obvious drawback with Model Binding is that you need a model for it to work: You have to provide a strongly typed object that can receive the data and this object has to map the inbound data. To rewrite the example above to use ModelBinding I have to create a class maps the properties that I need as parameters:public class LoginData { public string Username { get; set; } public string Password { get; set; } } and then accept the data like this in the API method:[HttpPost] public HttpResponseMessage Authenticate(LoginData login) { string username = login.Username; string password = login.Password; … } This works fine mapping the POST values to the properties of the login object. As a side benefit of this method definition, the method now also allows posting of JSON or XML to the same endpoint. If I change my request to send JSON like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: application/jsonContent-type: application/json Content-Length: 40 {"Username":"ricks","Password":"sekrit"} it works as well and transparently, courtesy of the nice Content Negotiation features of Web API. There's nothing wrong with using Model binding and in fact it's a common practice to use (view) model object for inputs coming back from the client and mapping them into these models. But it can be  kind of a hassle if you have AJAX applications with a ton of backend hits, especially if many methods are very atomic and focused and don't effectively require a model or view. Not always do you have to pass structured data, but sometimes there are just a couple of simple response values that need to be sent back. If all you need is to pass a couple operational parameters, creating a view model object just for parameter purposes seems like overkill. Maybe you can use the query string instead (if that makes sense), but if you can't then you can often end up with a plethora of 'message objects' that serve no further  purpose than to make Model Binding work. Note that you can accept multiple parameters with ModelBinding so the following would still work:[HttpPost] public HttpResponseMessage Authenticate(LoginData login, string loginDomain) but only the object will be bound to POST data. As long as loginDomain comes from the querystring or route data this will work. Collecting POST values with FormDataCollection Another more dynamic approach to handle POST values is to collect POST data into a FormDataCollection. FormDataCollection is a very basic key/value collection (like FormCollection in MVC and Request.Form in ASP.NET in general) and then read the values out individually by querying each. [HttpPost] public HttpResponseMessage Authenticate(FormDataCollection form) { var username = form.Get("Username"); var password = form.Get("Password"); …} The downside to this approach is that it's not strongly typed, you have to handle type conversions on non-string parameters, and it gets a bit more complicated to test such as setup as you have to seed a FormDataCollection with data. On the other hand it's flexible and easy to use and especially with string parameters is easy to deal with. 
The FormDataCollection approach is also dynamic, so if the client sends you a variety of combinations of values on which you make operating decisions, this is much easier to work with than a strongly typed object that would have to account for all possible values up front. The downside is that the code looks old school and isn't as self-documenting as a parameter list or object parameter would be. Nevertheless it's totally functional and a viable choice for collecting POST values.

What about [FromBody]?

Web API also has a [FromBody] attribute that can be assigned to parameters. If you have multiple parameters on a Web API method signature, you can use [FromBody] to specify which one will be parsed from the POST content. Unfortunately it's not terribly useful, as it only returns content in raw format and requires a totally non-standard format ("=content") to specify your content. For more info on how [FromBody] works and several related issues with how POST data is mapped, you can check out Mike Stall's post: How WebAPI does Parameter Binding. I'm not really sure where the Web API team thought [FromBody] would be a good fit, other than as a down-and-dirty way to send a full string buffer (see the sketch at the end of this post).

Extending Web API to make multiple POST Vars work? Don't think so

Clearly there's no native support for multiple POST variables being mapped to parameters, which is a bit of a bummer. I know in my own work on one project my customer actually found this to be a real sticking point in their AJAX backend work, and we ended up not using Web API and using MVC JSON features instead. That's kind of sad, because Web API is supposed to be the proper solution for AJAX backends. With all of ASP.NET Web API's extensibility you'd think there would be some way to build this functionality on our own, but after spending a bit of time digging and asking some of the experts from the team and the Web API community, I didn't hear anything that even suggests that this is possible. From what I could find, I'd say it's not possible, primarily because Web API's routing engine does not account for the POST variable mapping. This means [HttpPost] methods with urlencoded POST buffers are not mapped to the parameters of the endpoint, and so the routes would never even trigger a request that could be intercepted. Once the routing doesn't work, there's not much that can be done. If somebody has an idea how this could be accomplished, I would love to hear about it.

Do we really need multi-value POST mapping?

I think that POST value mapping is a feature one would expect of any API tool. If you look at common APIs out there, like Flickr and Google Maps, they all work with POST data. POST data is very prominent - much more so than JSON inputs - so supporting as many input options as possible would seem to be crucial. All that aside, Web API does provide very nice features with model binding that let you capture many POST variables easily enough, and logistically this will let you build whatever you need with POST data of all shapes, as long as you map objects. But having to have an object for every operation that receives a data input is going to take its toll in heavy AJAX applications, with a lot of types created that do nothing more than act as parameter containers. I also think that POST variable mapping is an expected behavior, and Web API's non-support will likely result in many, many questions like this one: How do I bind a simple POST value in ASP.NET WebAPI RC? with no clear answer.
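For reference, the one scenario [FromBody] does handle - grabbing the raw POST buffer - looks something like the following. This is a minimal sketch meant to sit in the same controller as the earlier examples; the action name is hypothetical, and the odd "=content" body format is shown in the comments:

[HttpPost]
public HttpResponseMessage PostRawBuffer([FromBody] string value)
{
    // 'value' receives the entire POST buffer, but only when the client
    // sends it in the non-standard key-less urlencoded format:
    //
    //   POST http://localhost:88/samples/postrawbuffer HTTP/1.1
    //   Content-type: application/x-www-form-urlencoded
    //   Content-Length: 25
    //
    //   =this is the raw content
    return Request.CreateResponse(HttpStatusCode.OK, value);
}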
I hope for v.Next of Web API that Microsoft will consider this a feature that's worth adding.

Related Articles

Passing multiple POST parameters to Web API Controller Methods
Mike Stall's post: How Web API does Parameter Binding
Where does ASP.NET Web API Fit?

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api

    Read the article

  • External USB drive is failing

    - by dma_k
    I have an external USB 2.0 drive WD My Book Mirror Edition, running in RAID 1 (mirroring) mode. A while ago the hard drive started to fail: it stops responding (directories are not listed returning an error after a big timeout). Sometimes it works for weeks before a failure, sometimes – few hours. Small write operations (like removing few files or editing a small file) do not harm, but when copying large files to the drive over the network, or creating the archive locally, the kernel dumps. Also interesting to note that once kernel has failed, Linux does not want to reboot normally (reboot hangs); when Linux box is shutdown with power button, WD drive does not go to sleep mode (as it usually does): leds continue to run, pressing and holding the "shutdown" button on drive's back panel does not do anything; only unplugging the power cord helps. Here goes the boot log: Aug 16 00:32:21 kernel: [ 1.514106] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Aug 16 00:32:21 kernel: [ 1.657738] ehci_hcd 0000:00:1d.7: PCI INT A -> GSI 23 (level, low) -> IRQ 23 Aug 16 00:32:21 kernel: [ 1.673747] ehci_hcd 0000:00:1d.7: setting latency timer to 64 Aug 16 00:32:21 kernel: [ 1.673751] ehci_hcd 0000:00:1d.7: EHCI Host Controller Aug 16 00:32:21 kernel: [ 1.725224] ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 1 Aug 16 00:32:21 kernel: [ 1.741647] ehci_hcd 0000:00:1d.7: using broken periodic workaround Aug 16 00:32:21 kernel: [ 1.761790] ehci_hcd 0000:00:1d.7: cache line size of 32 is not supported Aug 16 00:32:21 kernel: [ 1.761873] ehci_hcd 0000:00:1d.7: irq 23, io mem 0xfdfff000 Aug 16 00:32:21 kernel: [ 1.796043] ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00 Aug 16 00:32:21 kernel: [ 1.879069] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002 Aug 16 00:32:21 kernel: [ 1.895446] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Aug 16 00:32:21 kernel: [ 1.911796] usb usb1: Product: EHCI Host Controller Aug 16 00:32:21 kernel: [ 1.928015] usb usb1: Manufacturer: Linux 2.6.32-5-686 ehci_hcd Aug 16 00:32:21 kernel: [ 1.944331] usb usb1: SerialNumber: 0000:00:1d.7 Aug 16 00:32:21 kernel: [ 1.961285] usb usb1: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 1.994412] hub 1-0:1.0: USB hub found Aug 16 00:32:21 kernel: [ 2.010864] hub 1-0:1.0: 8 ports detected Aug 16 00:32:21 kernel: [ 2.085939] uhci_hcd: USB Universal Host Controller Interface driver Aug 16 00:32:21 kernel: [ 2.191945] uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23 Aug 16 00:32:21 kernel: [ 2.226029] uhci_hcd 0000:00:1d.0: setting latency timer to 64 Aug 16 00:32:21 kernel: [ 2.226034] uhci_hcd 0000:00:1d.0: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.243237] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 2 Aug 16 00:32:21 kernel: [ 2.260390] uhci_hcd 0000:00:1d.0: irq 23, io base 0x0000fe00 Aug 16 00:32:21 kernel: [ 2.277517] usb usb2: New USB device found, idVendor=1d6b, idProduct=0001 Aug 16 00:32:21 kernel: [ 2.294815] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Aug 16 00:32:21 kernel: [ 2.312173] usb usb2: Product: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.329534] usb usb2: Manufacturer: Linux 2.6.32-5-686 uhci_hcd Aug 16 00:32:21 kernel: [ 2.346828] usb usb2: SerialNumber: 0000:00:1d.0 Aug 16 00:32:21 kernel: [ 2.412989] usb usb2: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 2.430651] usb 1-2: new high speed USB device using ehci_hcd and address 2 Aug 16 00:32:21 
kernel: [ 2.449046] hub 2-0:1.0: USB hub found Aug 16 00:32:21 kernel: [ 2.466514] hub 2-0:1.0: 2 ports detected Aug 16 00:32:21 kernel: [ 2.484639] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 19 (level, low) -> IRQ 19 Aug 16 00:32:21 kernel: [ 2.537750] uhci_hcd 0000:00:1d.1: setting latency timer to 64 Aug 16 00:32:21 kernel: [ 2.537756] uhci_hcd 0000:00:1d.1: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.555085] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 3 Aug 16 00:32:21 kernel: [ 2.572231] uhci_hcd 0000:00:1d.1: irq 19, io base 0x0000fd00 Aug 16 00:32:21 kernel: [ 2.589593] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001 Aug 16 00:32:21 kernel: [ 2.606869] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Aug 16 00:32:21 kernel: [ 2.624134] usb usb3: Product: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.641329] usb usb3: Manufacturer: Linux 2.6.32-5-686 uhci_hcd Aug 16 00:32:21 kernel: [ 2.658505] usb usb3: SerialNumber: 0000:00:1d.1 Aug 16 00:32:21 kernel: [ 2.675843] usb usb3: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 2.692864] hub 3-0:1.0: USB hub found Aug 16 00:32:21 kernel: [ 2.709651] hub 3-0:1.0: 2 ports detected Aug 16 00:32:21 kernel: [ 2.727378] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18 Aug 16 00:32:21 kernel: [ 2.768252] uhci_hcd 0000:00:1d.2: setting latency timer to 64 Aug 16 00:32:21 kernel: [ 2.768258] uhci_hcd 0000:00:1d.2: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.806679] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 4 Aug 16 00:32:21 kernel: [ 2.824117] uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000fc00 Aug 16 00:32:21 kernel: [ 2.841405] usb 1-2: New USB device found, idVendor=1058, idProduct=1104 Aug 16 00:32:21 kernel: [ 2.858448] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3 Aug 16 00:32:21 kernel: [ 2.875347] usb 1-2: Product: My Book Aug 16 00:32:21 kernel: [ 2.892113] usb 1-2: Manufacturer: Western Digital Aug 16 00:32:21 kernel: [ 2.908915] usb 1-2: SerialNumber: 575532553130303530353538 Aug 16 00:32:21 kernel: [ 2.943242] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001 Aug 16 00:32:21 kernel: [ 2.960405] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Aug 16 00:32:21 kernel: [ 2.977615] usb usb4: Product: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.994687] usb usb4: Manufacturer: Linux 2.6.32-5-686 uhci_hcd Aug 16 00:32:21 kernel: [ 3.011711] usb usb4: SerialNumber: 0000:00:1d.2 Aug 16 00:32:21 kernel: [ 3.029589] usb usb4: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 3.082027] sd 2:0:0:0: [sda] Attached SCSI disk Aug 16 00:32:21 kernel: [ 3.103953] usb 1-2: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 3.122625] hub 4-0:1.0: USB hub found Aug 16 00:32:21 kernel: [ 3.140484] hub 4-0:1.0: 2 ports detected Aug 16 00:32:21 kernel: [ 3.161680] uhci_hcd 0000:00:1d.3: PCI INT D -> GSI 16 (level, low) -> IRQ 16 Aug 16 00:32:21 kernel: [ 3.181257] uhci_hcd 0000:00:1d.3: setting latency timer to 64 Aug 16 00:32:21 kernel: [ 3.181263] uhci_hcd 0000:00:1d.3: UHCI Host Controller Aug 16 00:32:21 kernel: [ 3.198614] uhci_hcd 0000:00:1d.3: new USB bus registered, assigned bus number 5 Aug 16 00:32:21 kernel: [ 3.216012] uhci_hcd 0000:00:1d.3: irq 16, io base 0x0000fb00 Aug 16 00:32:21 kernel: [ 3.249877] Uniform CD-ROM driver Revision: 3.20 Aug 16 00:32:21 kernel: [ 3.267765] usb usb5: New USB device found, idVendor=1d6b, idProduct=0001 Aug 16 
00:32:21 kernel: [ 3.284947] usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Aug 16 00:32:21 kernel: [ 3.302023] usb usb5: Product: UHCI Host Controller Aug 16 00:32:21 kernel: [ 3.319215] usb usb5: Manufacturer: Linux 2.6.32-5-686 uhci_hcd Aug 16 00:32:21 kernel: [ 3.336298] usb usb5: SerialNumber: 0000:00:1d.3 Aug 16 00:32:21 kernel: [ 3.368377] Initializing USB Mass Storage driver... Aug 16 00:32:21 kernel: [ 3.390652] usbcore: registered new interface driver hiddev Aug 16 00:32:21 kernel: [ 3.408109] scsi4 : SCSI emulation for USB Mass Storage devices Aug 16 00:32:21 kernel: [ 3.425281] sr 0:0:1:0: Attached scsi CD-ROM sr0 Aug 16 00:32:21 kernel: [ 3.438978] sr 0:0:1:0: Attached scsi generic sg0 type 5 Aug 16 00:32:21 kernel: [ 3.456328] usbcore: registered new interface driver usb-storage Aug 16 00:32:21 kernel: [ 3.474564] usb-storage: device found at 2 Aug 16 00:32:21 kernel: [ 3.474567] usb-storage: waiting for device to settle before scanning Aug 16 00:32:21 kernel: [ 3.475320] sd 2:0:0:0: Attached scsi generic sg1 type 0 Aug 16 00:32:21 kernel: [ 3.492587] USB Mass Storage support registered. Aug 16 00:32:21 kernel: [ 3.510930] usb usb5: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 3.531076] hub 5-0:1.0: USB hub found Aug 16 00:32:21 kernel: [ 3.548399] hub 5-0:1.0: 2 ports detected Aug 16 00:32:21 kernel: [ 3.591743] input: Western Digital My Book as /devices/pci0000:00/0000:00:1d.7/usb1/1-2/1-2:1.1/input/input2 Aug 16 00:32:21 kernel: [ 3.609515] generic-usb 0003:1058:1104.0001: input,hidraw0: USB HID v1.11 Device [Western Digital My Book] on usb-0000:00:1d.7-2/input1 Aug 16 00:32:21 kernel: [ 3.627466] usbcore: registered new interface driver usbhid Aug 16 00:32:21 kernel: [ 8.581664] usb-storage: device scan complete Aug 16 00:32:21 kernel: [ 8.624270] scsi 4:0:0:0: Direct-Access WD My Book 1008 PQ: 0 ANSI: 4 Aug 16 00:32:21 kernel: [ 8.655135] scsi 4:0:0:1: Enclosure WD My Book Device 1008 PQ: 0 ANSI: 4 Aug 16 00:32:21 kernel: [ 8.675393] sd 4:0:0:0: Attached scsi generic sg2 type 0 Aug 16 00:32:21 kernel: [ 8.698669] scsi 4:0:0:1: Attached scsi generic sg3 type 13 Aug 16 00:32:21 kernel: [ 8.723370] sd 4:0:0:0: [sdb] 1953513472 512-byte logical blocks: (1.00 TB/931 GiB) Aug 16 00:32:21 kernel: [ 8.750477] sd 4:0:0:0: [sdb] Write Protect is off Aug 16 00:32:21 kernel: [ 8.769411] sd 4:0:0:0: [sdb] Mode Sense: 10 00 00 00 Aug 16 00:32:21 kernel: [ 8.769414] sd 4:0:0:0: [sdb] Assuming drive cache: write through Aug 16 00:32:21 kernel: [ 8.822971] sd 4:0:0:0: [sdb] Assuming drive cache: write through Aug 16 00:32:21 kernel: [ 8.841978] sdb: sdb1 Aug 16 00:32:21 kernel: [ 8.905580] sd 4:0:0:0: [sdb] Assuming drive cache: write through Aug 16 00:32:21 kernel: [ 8.924173] sd 4:0:0:0: [sdb] Attached SCSI disk Aug 16 00:32:21 kernel: [ 11.600492] XFS mounting filesystem sdb1 Aug 16 00:32:21 kernel: [ 12.222948] Ending clean XFS mount for filesystem: sdb1 After a while the following appears in a log: Aug 16 09:30:56 kernel: [32359.112029] usb 1-2: reset high speed USB device using ehci_hcd and address 2 Aug 16 09:31:59 kernel: [32422.112035] usb 1-2: reset high speed USB device using ehci_hcd and address 2 Aug 16 09:33:00 kernel: [32483.112029] usb 1-2: reset high speed USB device using ehci_hcd and address 2 And then it is followed by few kernel dumps, which I think, are not good: Aug 16 09:33:40 kernel: [32520.428027] INFO: task xfssyncd:1002 blocked for more than 120 seconds. 
Aug 16 09:33:40 kernel: [32520.462689] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Aug 16 09:33:40 kernel: [32520.497422] xfssyncd D c3d84a60 0 1002 2 0x00000000 Aug 16 09:33:40 kernel: [32520.532117] f6c9aa80 00000046 c1132742 c3d84a60 00000286 c1418100 c1418100 00000000 Aug 16 09:33:40 kernel: [32520.566867] f6c9ac3c c2808100 00000000 f653b18b 00001d76 00000001 f6c9aa80 c3c3f0e0 Aug 16 09:33:40 kernel: [32520.601343] 08e59242 f6c9ac3c 2e41392b 00000000 08e59242 00000000 c3f7fb48 0067385a Aug 16 09:33:40 kernel: [32520.635533] Call Trace: Aug 16 09:33:40 kernel: [32520.668991] [<c1132742>] ? cfq_set_request+0x0/0x290 Aug 16 09:33:40 kernel: [32520.702804] [<c126b532>] ? io_schedule+0x5f/0x98 Aug 16 09:33:40 kernel: [32520.736555] [<c1128be0>] ? get_request_wait+0xcb/0x146 Aug 16 09:33:40 kernel: [32520.770360] [<c10437ba>] ? autoremove_wake_function+0x0/0x2d Aug 16 09:33:40 kernel: [32520.804110] [<c112907c>] ? __make_request+0x2cc/0x3d9 Aug 16 09:33:40 kernel: [32520.837713] [<c1128230>] ? blk_peek_request+0x135/0x143 Aug 16 09:33:40 kernel: [32520.871265] [<f8582987>] ? scsi_dispatch_cmd+0x185/0x1e5 [scsi_mod] Aug 16 09:33:40 kernel: [32520.904407] [<c1127cf1>] ? generic_make_request+0x266/0x2b4 Aug 16 09:33:40 kernel: [32520.937007] [<c10cf821>] ? bvec_alloc_bs+0x95/0xaf Aug 16 09:33:40 kernel: [32520.969033] [<c1127dfb>] ? submit_bio+0xbc/0xd6 Aug 16 09:33:40 kernel: [32521.000485] [<c10cffd1>] ? bio_add_page+0x28/0x2e Aug 16 09:33:40 kernel: [32521.031403] [<f8918d38>] ? _xfs_buf_ioapply+0x206/0x22b [xfs] Aug 16 09:33:40 kernel: [32521.061888] [<f89197bd>] ? xfs_buf_iorequest+0x38/0x60 [xfs] Aug 16 09:33:40 kernel: [32521.091845] [<f8907230>] ? xlog_bdstrat_cb+0x16/0x3d [xfs] Aug 16 09:33:40 kernel: [32521.121222] [<f8905781>] ? XFS_bwrite+0x32/0x64 [xfs] Aug 16 09:33:40 kernel: [32521.150007] [<f89059be>] ? xlog_sync+0x20b/0x311 [xfs] Aug 16 09:33:40 kernel: [32521.178214] [<f89112fc>] ? xfs_trans_ail_tail+0x12/0x27 [xfs] Aug 16 09:33:40 kernel: [32521.205914] [<f8906261>] ? xlog_state_sync_all+0xa2/0x141 [xfs] Aug 16 09:33:40 kernel: [32521.233074] [<f8906611>] ? _xfs_log_force+0x51/0x68 [xfs] Aug 16 09:33:40 kernel: [32521.259664] [<c103abaf>] ? process_timeout+0x0/0x5 Aug 16 09:33:40 kernel: [32521.285662] [<f8906636>] ? xfs_log_force+0xe/0x27 [xfs] Aug 16 09:33:40 kernel: [32521.311171] [<f89202df>] ? xfs_sync_worker+0x17/0x5c [xfs] Aug 16 09:33:40 kernel: [32521.336117] [<f891fbb7>] ? xfssyncd+0x134/0x17d [xfs] Aug 16 09:33:40 kernel: [32521.360498] [<f891fa83>] ? xfssyncd+0x0/0x17d [xfs] Aug 16 09:33:40 kernel: [32521.384211] [<c1043588>] ? kthread+0x61/0x66 Aug 16 09:33:40 kernel: [32521.407890] [<c1043527>] ? kthread+0x0/0x66 Aug 16 09:33:40 kernel: [32521.430876] [<c1003d47>] ? kernel_thread_helper+0x7/0x10 Aug 16 09:33:40 kernel: [32521.453394] INFO: task flush-8:16:12945 blocked for more than 120 seconds. Aug 16 09:33:40 kernel: [32521.476116] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Aug 16 09:33:40 kernel: [32521.498579] flush-8:16 D 00000000 0 12945 2 0x00000000 Aug 16 09:33:40 kernel: [32521.520649] f4e4d540 00000046 e412e940 00000000 00000002 c1418100 c1418100 c14136ac Aug 16 09:33:40 kernel: [32521.542426] f4e4d6fc c2808100 00000000 00000000 000008b4 00000001 f4e4d540 c3c3f0e0 Aug 16 09:33:40 kernel: [32521.563745] 02e905a8 f4e4d6fc 007a5399 00000000 02e905a8 00000000 f4e2db48 00670b98 Aug 16 09:33:40 kernel: [32521.585077] Call Trace: Aug 16 09:33:40 kernel: [32521.605790] [<c126b532>] ? 
io_schedule+0x5f/0x98 Aug 16 09:33:40 kernel: [32521.626184] [<c1128be0>] ? get_request_wait+0xcb/0x146 Aug 16 09:33:40 kernel: [32521.646133] [<c10437ba>] ? autoremove_wake_function+0x0/0x2d Aug 16 09:33:40 kernel: [32521.665659] [<c112907c>] ? __make_request+0x2cc/0x3d9 Aug 16 09:33:40 kernel: [32521.684716] [<f891796e>] ? xfs_convert_page+0x30a/0x331 [xfs] Aug 16 09:33:40 kernel: [32521.703366] [<c1127cf1>] ? generic_make_request+0x266/0x2b4 Aug 16 09:33:40 kernel: [32521.721644] [<c10cf821>] ? bvec_alloc_bs+0x95/0xaf Aug 16 09:33:40 kernel: [32521.739465] [<c1127dfb>] ? submit_bio+0xbc/0xd6 Aug 16 09:33:40 kernel: [32521.756896] [<c10cfa45>] ? bio_alloc_bioset+0x7b/0xba Aug 16 09:33:40 kernel: [32521.774046] [<f8917af0>] ? xfs_submit_ioend_bio+0x3b/0x44 [xfs] Aug 16 09:33:40 kernel: [32521.790694] [<f8917ba3>] ? xfs_submit_ioend+0xaa/0xc4 [xfs] Aug 16 09:33:40 kernel: [32521.806736] [<f891817d>] ? xfs_page_state_convert+0x5c0/0x61c [xfs] Aug 16 09:33:40 kernel: [32521.822859] [<c113705b>] ? __lookup_tag+0x8e/0xee Aug 16 09:33:40 kernel: [32521.838958] [<f891840d>] ? xfs_vm_writepage+0x91/0xc4 [xfs] Aug 16 09:33:40 kernel: [32521.855039] [<c108bbcc>] ? __writepage+0x8/0x22 Aug 16 09:33:40 kernel: [32521.871067] [<c108c17b>] ? write_cache_pages+0x1af/0x29f Aug 16 09:33:40 kernel: [32521.886616] [<c108bbc4>] ? __writepage+0x0/0x22 Aug 16 09:33:40 kernel: [32521.901593] [<c108c285>] ? generic_writepages+0x1a/0x21 Aug 16 09:33:40 kernel: [32521.916455] [<f8918338>] ? xfs_vm_writepages+0x0/0x38 [xfs] Aug 16 09:33:40 kernel: [32521.931484] [<c108c2a5>] ? do_writepages+0x19/0x25 Aug 16 09:33:40 kernel: [32521.946648] [<c10c80d9>] ? writeback_single_inode+0xc7/0x273 Aug 16 09:33:40 kernel: [32521.961675] [<c10c8c44>] ? writeback_inodes_wb+0x3dd/0x49c Aug 16 09:33:40 kernel: [32521.976831] [<c10c8e18>] ? wb_writeback+0x115/0x178 Aug 16 09:33:40 kernel: [32521.991778] [<c10c901f>] ? wb_do_writeback+0x121/0x131 Aug 16 09:33:40 kernel: [32522.006538] [<c103abaf>] ? process_timeout+0x0/0x5 Aug 16 09:33:40 kernel: [32522.021091] [<c10c9050>] ? bdi_writeback_task+0x21/0x89 Aug 16 09:33:40 kernel: [32522.035493] [<c10979e5>] ? bdi_start_fn+0x59/0xa4 Aug 16 09:33:40 kernel: [32522.049765] [<c109798c>] ? bdi_start_fn+0x0/0xa4 Aug 16 09:33:40 kernel: [32522.063792] [<c1043588>] ? kthread+0x61/0x66 Aug 16 09:33:40 kernel: [32522.077612] [<c1043527>] ? kthread+0x0/0x66 Aug 16 09:33:40 kernel: [32522.091260] [<c1003d47>] ? kernel_thread_helper+0x7/0x10 Aug 16 09:33:40 kernel: [32522.104966] INFO: task smartctl:13098 blocked for more than 120 seconds. Aug 16 09:33:40 kernel: [32522.118883] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Aug 16 09:33:40 kernel: [32522.133012] smartctl D 00000020 0 13098 13097 0x00000000 Aug 16 09:33:40 kernel: [32522.147221] e50b9540 00000086 c11d28a8 00000020 00000770 c1418100 c1418100 c14136ac Aug 16 09:33:40 kernel: [32522.161720] e50b96fc c2808100 00000000 e53e8800 00000000 00000020 c3cec000 c13886c0 Aug 16 09:33:40 kernel: [32522.176217] f99dab68 e50b96fc 007a4f1e 00000001 c4082f24 c4082ed8 00000001 c3c3f0e0 Aug 16 09:33:40 kernel: [32522.190737] Call Trace: Aug 16 09:33:40 kernel: [32522.205038] [<c11d28a8>] ? __netdev_alloc_skb+0x14/0x2d Aug 16 09:33:40 kernel: [32522.219605] [<c126b799>] ? schedule_timeout+0x20/0xb0 Aug 16 09:33:40 kernel: [32522.234144] [<c112820d>] ? blk_peek_request+0x112/0x143 Aug 16 09:33:40 kernel: [32522.248649] [<f85873b6>] ? scsi_request_fn+0x3c1/0x47a [scsi_mod] Aug 16 09:33:40 kernel: [32522.263233] [<c103aba8>] ? 
del_timer+0x55/0x5c Aug 16 09:33:40 kernel: [32522.277773] [<c126b6a2>] ? wait_for_common+0xa4/0x100 Aug 16 09:33:40 kernel: [32522.292342] [<c102cd8d>] ? default_wake_function+0x0/0x8 Aug 16 09:33:40 kernel: [32522.306958] [<c112b3d1>] ? blk_execute_rq+0x8b/0xb2 Aug 16 09:33:40 kernel: [32522.321569] [<c112b2ac>] ? blk_end_sync_rq+0x0/0x23 Aug 16 09:33:40 kernel: [32522.336070] [<c112b58b>] ? blk_recount_segments+0x13/0x20 Aug 16 09:33:40 kernel: [32522.350583] [<c1127307>] ? blk_rq_bio_prep+0x44/0x74 Aug 16 09:33:40 kernel: [32522.365059] [<c112b0b2>] ? blk_rq_map_kern+0xc5/0xee Aug 16 09:33:40 kernel: [32522.379439] [<c112e2a5>] ? sg_scsi_ioctl+0x221/0x2aa Aug 16 09:33:40 kernel: [32522.393801] [<c112e672>] ? scsi_cmd_ioctl+0x344/0x39a Aug 16 09:33:40 kernel: [32522.408140] [<c1024c87>] ? update_curr+0x106/0x1b3 Aug 16 09:33:40 kernel: [32522.422566] [<c1024c87>] ? update_curr+0x106/0x1b3 Aug 16 09:33:40 kernel: [32522.436832] [<f87676aa>] ? sd_ioctl+0x90/0xb5 [sd_mod] Aug 16 09:33:40 kernel: [32522.451228] [<c112c35f>] ? __blkdev_driver_ioctl+0x53/0x63 Aug 16 09:33:40 kernel: [32522.465689] [<c112cbbf>] ? blkdev_ioctl+0x850/0x891 Aug 16 09:33:40 kernel: [32522.479982] [<c1020474>] ? __wake_up_common+0x34/0x59 Aug 16 09:33:40 kernel: [32522.494138] [<c10244cd>] ? complete+0x28/0x36 Aug 16 09:33:40 kernel: [32522.507986] [<c1086c64>] ? find_get_page+0x1f/0x81 Aug 16 09:33:40 kernel: [32522.521671] [<c10abed5>] ? add_partial+0xe/0x40 Aug 16 09:33:40 kernel: [32522.535285] [<c1086e68>] ? lock_page+0x8/0x1d Aug 16 09:33:40 kernel: [32522.548797] [<c1087432>] ? filemap_fault+0xb5/0x2e6 Aug 16 09:33:40 kernel: [32522.562141] [<c109941c>] ? __do_fault+0x381/0x3b1 Aug 16 09:33:40 kernel: [32522.575441] [<c10d0c30>] ? block_ioctl+0x27/0x2c Aug 16 09:33:40 kernel: [32522.588708] [<c10d0c09>] ? block_ioctl+0x0/0x2c Aug 16 09:33:40 kernel: [32522.601858] [<c10bcd78>] ? vfs_ioctl+0x1c/0x5f Aug 16 09:33:40 kernel: [32522.614917] [<c10bd30c>] ? do_vfs_ioctl+0x4aa/0x4e5 Aug 16 09:33:40 kernel: [32522.627961] [<c10350db>] ? __do_softirq+0x115/0x151 Aug 16 09:33:40 kernel: [32522.640901] [<c126e270>] ? do_page_fault+0x2f1/0x307 Aug 16 09:33:40 kernel: [32522.653803] [<c10bd388>] ? sys_ioctl+0x41/0x58 Aug 16 09:33:40 kernel: [32522.666674] [<c10030fb>] ? sysenter_do_call+0x12/0x28 Then again few messages reset high speed USB device using ehci_hcd and address 2. I have browsed and read similar error reports here and there and I tried: I have upgraded the kernel from v2.6.26-2 to 2.6.32-5, which has not solved the problem. They say, this might a cable problem. I have tried to replace the USB-to-miniUSB cable (that connects external drive with computer) with another one. No changes. Somebody suggests to try another USB port. I have only 4 external USB ports, tried another one with no success. They say to try uhci_hcd. I have unmounted the device, unloaded ehci_hcd and mounted again. The difference was that now in log I get reset full speed USB device using uhci_hcd and address 2 and similar kernel dumps after a while. They say to echo 128 > /sys/block/sdb/device/max_sectors. I tried it with ehci_hcd with no success (note: I have issued this command after the drive was mounted but before using it actively). I have lauched smartmond and checking periodically the output of smartctl: drive temperature is OK, number of bad sectors and uncorrectable errors is 0. Nothing suspicious is reported by S.M.A.R.T. 
except maybe the following: Aug 16 12:40:12 kernel: [43715.314566] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO Aug 16 12:40:13 kernel: [43715.705622] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO Of course, I have not tried all combinations of the above. But unfortunately, I have run out of ideas. If anybody can advise something specific about the problem, you are very welcome.

    Read the article

  • Upgrade 10g Osso to 11g OAM (Part 2)

    - by Pankaj Chandiramani
    This is part 2 of http://blogs.oracle.com/pankaj/2010/11/upgrade_10g_osso_to_11g_oam.html In the last post we saw an overview of upgrading OSSO to OAM 11g. Now some more details on the same. As we are using the co-existence feature, we have to install the OAM server and upgrade the existing OSSO 10g server to the OAM servers.

OAM Upgrade Steps Overview
Pre-req: You already have OAM 11g installed
Upgrade Step 1: Configure User Store & make it primary
Upgrade Step 2: Create Policy Domain - this is done by UA automatically
Upgrade Step 3: Migrate Partners - this is done by running the Upgrade Assistant
Verify successful upgrade

Details on the UA step: The existing OSSO 10g servers are upgraded to the OAM server by running the UA script in OAM, which copies over all the partner app details from OSSO to OAM 11g. run_ua.sh is the script name; it will ask you to input the Policies.properties file from the $OH/sso/config folder of OSSO 10g and other variables like the DB password.

Some pointers
Upgrading OSSO to OAM 11g by default enables coexistence mode on the OAM server.
Front-end the OAM server with the same load balancer that is the front end of the OSSO 10g servers. Now OAM and OSSO 10g servers are working in co-exist mode. OAM 11g is made to understand the 10g OSSO token format and session handling capabilities so as to co-exist with 10g OSSO servers.

How to test?
Try to access the partner applications and verify that single sign-on works. Also verify that the user does not have to log in again if the user is already authenticated by either the OAM or OSSO 10g server.

Screenshots and troubleshooting tips to follow...

    Read the article

  • $PATH issues with OSX Lion

    - by Mikey
    I'm having some issues with running mysql from the terminal:

macmini:~ michael$ which mysql
/Applications/XAMPP/xamppfiles/bin/mysql
macmini:~ michael$ mysql
-bash: /usr/local/mysql/bin/mysql: No such file or directory

I had a previous installation at /usr/local/mysql/bin/mysql which no longer exists. My path variable is as follows:

macmini:~ michael$ echo $PATH
/usr/bin:/bin:/usr/sbin:/sbin:/Applications/XAMPP/xamppfiles/bin/:/usr/local/bin:/usr/X11/bin:/usr/local/MacGPG2/bin:/usr/texbin

Dropping to root seems to function correctly:

macmini:~ michael$ sudo bash
Password:
bash-3.2# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 66
Server version: 5.1.44 Source distribution
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>

I seem to have found the issue - but I'm not sure how to change or remove this alias:

macmini:~ michael$ type -a mysql
mysql is aliased to `/usr/local/mysql/bin/mysql'
mysql is /Applications/XAMPP/xamppfiles/bin/mysql
mysql is /Applications/XAMPP/xamppfiles/bin/mysql

    Read the article

  • How about a new platform for your next API&hellip; a CMS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/05/22/how-about-a-new-platform-for-your-next-apihellip-a.aspx

Say what? I'm seeing a type of API emerge which serves static or long-lived resources, which are mostly read-only and have a controlled process to update the data that gets served. Think of something like an app configuration API, where you want a central location for changeable settings. You could use this server side to store database connection strings and keep all your instances in sync, or it could be used client side to push changes out to all users (and potentially drive A/B or MVT testing). That's a good candidate for a RESTful API which makes proper use of HTTP expiration and validation caching to minimise traffic, but really you want a front-end UI where you can edit the current config that the API returns and publish your changes. Sounds like a Content Management System would be a good fit? I've been looking at that, and it's a great fit for this scenario. You get a lot of what you need out of the box, the amount of custom code you need to write is minimal, and you get a whole lot of extra stuff from using a CMS which is very useful, but probably not something you'd build if you had to put together a quick UI over your API content (like a publish workflow, fine-grained security and an audit trail). You typically use a CMS for HTML resources, but it's simple to expose JSON instead - or to do content negotiation to support both, so you can open a resource in a browser and see a nice visual representation, or request it with Accept=application/json and get the same content rendered as JSON for the app to use.

Enter Umbraco

Umbraco is an open source .NET CMS that's been around for a while. It has very good adoption, a lively community and a good release cycle. It's easy to use, has all the functionality you need for a CMS-driven API, and it's scalable (although you won't necessarily put much scale on the CMS layer). In the rest of this post, I'll build out a simple app config API using Umbraco. We'll define the structure of the configuration resource by creating a new Document Type and setting custom properties; then we'll build a very simple Razor template to return configuration documents as JSON; then create a resource and see how it looks. And we'll look at how you could build this into a wider solution. If you want to try this for yourself, it's ultra easy - there's an Umbraco image in the Azure Website gallery, so all you need to do is create a new Website, select Umbraco from the image and complete the installation. It will create a SQL Azure database to store all the content, as well as a Website instance for editing and accessing content. They're standard Azure resources, so you can scale them as you need. The default install creates a starter site for some HTML content, which you can use to learn your way around (or just delete).

1. Create Configuration Document Type

In Umbraco you manage content by creating and modifying documents, and every document has a known type, defining what properties it holds. We'll create a new Document Type to describe some basic config settings. In the Settings section from the left navigation (spanner icon), expand Document Types and Master, hit the ellipsis and select to create a new Document Type. This will base your new type off the Master type, which gives you some existing properties that we'll use - like the Page Title, which will be the resource URL.
In the Generic Properties tab for the new Document Type, you set the properties you'll be able to edit and return for the resource. Here I've added a text string where I'll set a default cache lifespan, an image which I can use for a banner display, and a date which could show the user when the next release is due. This is the sort of thing that sits nicely in an app config API. It's likely to change during the life of the product, but not very often, so it's good to have a centralised place where you can make and publish changes easily and safely. It also enables A/B and MVT testing, as you can change the response each client gets based on your set logic, and their apps will behave differently without needing a release.

2. Define the response template

Now we've defined the structure of the resource (as a document), in Umbraco we can define a C# Razor template to say how that resource gets rendered to the client. If you only want to provide JSON, it's easy to render the content of the document by building each property into the response (Umbraco uses dynamic objects, so you can specify document properties as object properties), or you can support content negotiation with very little effort. Here's a template to render the document as HTML or JSON depending on the Accept header, using JSON.NET for the API rendering:

@inherits Umbraco.Web.Mvc.UmbracoTemplatePage
@using Newtonsoft.Json
@{
    Layout = null;
}
@if(UmbracoContext.HttpContext.Request.Headers["accept"] != null
    && UmbracoContext.HttpContext.Request.Headers["accept"] == "application/json")
{
    Response.ContentType = "application/json";
    @Html.Raw(JsonConvert.SerializeObject(new {
        cacheLifespan = CurrentPage.cacheLifespan,
        bannerImageUrl = CurrentPage.bannerImage,
        nextReleaseDate = CurrentPage.nextReleaseDate
    }))
}
else
{
    <h1>App configuration</h1>
    <p>Cache lifespan: <b>@CurrentPage.cacheLifespan</b></p>
    <p>Banner Image: </p>
    <img src="@CurrentPage.bannerImage">
    <p>Next Release Date: <b>@CurrentPage.nextReleaseDate</b></p>
}

That's a rough-and-ready example of what you can do. You could make it completely generic and just render all the document's properties as JSON, but having a specific template for each resource gives you control over what gets sent out. And the templates are evaluated at run-time, so if you need to change the output - or extend it, say to add caching response headers - you just edit the template and save, and the next client request gets rendered from the new template. No code to build and ship.

3. Create the content

With your document type created, in the Content pane you can create a new instance of that document, where Umbraco gives you a nice UI to input values for the properties we set up on the Document Type. Here I've set the cache lifespan to an xs:duration value, uploaded an image for the banner and specified a release date. Each property gets the appropriate input control - text box, file upload and date picker. At the top of the page is the name of the resource - myapp in this example. That specifies the URL for the resource, so if I had a DNS entry pointing to my Umbraco instance, I could access the config with a URL like http://static.x.y.z.com/config/myapp. The setup is all done now, so when we publish this resource it'll be available to access.

4. Access the resource

Now if you open that URL in the browser, you'll see the HTML version rendered - complete with the image and formatted date.
Umbraco lets you save changes and preview them before publishing, so the HTML view could be a good way of showing editors their changes in a usable view before they confirm them. If you browse the same URL from a REST client, specifying the Accept=application/json request header, you get the JSON rendering of the exact same resource. So we have one resource, with a managed UI to publish it, being accessed as HTML or JSON with a tiny amount of effort.

5. The wider landscape

If you have fairly stable content to expose as an API, I think this approach is really worth considering. Umbraco scales very nicely, but in a typical solution you probably wouldn't need it to. When you have additional requirements, like logging API access requests - but doing it out-of-band so clients aren't impacted - you can put a very thin API layer on top of Umbraco and cache the CMS responses in your API layer (a sketch of this layer follows at the end of the post). Here the API does a passthrough to the CMS, so the CMS still controls the content, but the API caches the response. If the response is cached for 1 minute, then Umbraco only needs to handle 1 request per minute (multiplied by the number of API instances), so if you need to support thousands of requests per second, you're scaling a thin, simple API layer rather than having to scale the more complex CMS infrastructure (including the database). This approach also supports logging by asynchronously publishing a message to a queue (Redis in this case), which can be picked up later and persisted by a different process.

Does it work? Beautifully. Using Azure, I spiked the solution above (including the Redis logging framework, which I'll blog about later) in half a day. That included setting up different roles in Umbraco to demonstrate a managed workflow for publishing changes, and a couple of document types representing different resources.

Is it maintainable? We have three moving parts, which are all managed resources in Azure - an Azure Website for Umbraco, which may need a couple of instances for HA (or may not, depending on how long the content can be cached), a message queue (Redis is in preview in Azure, but you can easily use Service Bus Queues if performance is less of a concern), and the Web Role for the API. Two of the components are off-the-shelf, from open source projects, and the only custom code is the API, which is very simple.

Does it scale? Pretty nicely. With a single Umbraco instance running as an Azure Website, and with 4x instances for my API layer (Standard sized Web Roles), I got just under 4,000 requests per second served reliably, with a Worker Role in the background saving the access logs. So we had a nice UI to publish app config changes, with a friendly Web preview and a publishing workflow, capable of supporting 14 million requests in an hour, with less than a day's effort. Worth considering if you're publishing long-lived resources through your API.
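To make the passthrough idea concrete, here's a minimal sketch of what that thin API layer could look like. The CMS address, route parameter and one-minute cache duration are all assumptions for illustration - this is not the code from the spike:

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Runtime.Caching;
using System.Text;
using System.Threading.Tasks;
using System.Web.Http;

public class ConfigController : ApiController
{
    // Hypothetical CMS endpoint - in a real deployment this points at the Umbraco site.
    private static readonly HttpClient Cms = CreateCmsClient();
    private static readonly MemoryCache Cache = MemoryCache.Default;

    private static HttpClient CreateCmsClient()
    {
        var client = new HttpClient { BaseAddress = new Uri("http://my-umbraco-site/config/") };
        // Ask the Umbraco template for the JSON rendering, not the HTML one.
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
        return client;
    }

    [HttpGet]
    public async Task<HttpResponseMessage> Get(string app)
    {
        var json = Cache.Get(app) as string;
        if (json == null)
        {
            // Cache miss: pass through to the CMS and hold the response for a minute,
            // so Umbraco sees at most ~1 request per minute per resource per API instance.
            json = await Cms.GetStringAsync(app);
            Cache.Set(app, json, DateTimeOffset.UtcNow.AddMinutes(1));
        }
        var response = Request.CreateResponse(HttpStatusCode.OK);
        response.Content = new StringContent(json, Encoding.UTF8, "application/json");
        return response;
    }
}

The out-of-band logging call would slot in after the cache check, pushing a message onto the queue without blocking the response.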

    Read the article

  • "Not enough memory" error when running Peachtree Accounting 2010 with Logmein

    - by Justin Bowers
    I set up one of my clients with a LogMeIn Pro account recently and everything has been running fine so far. Today, however, my client received an error when they were logged into their work computer remotely and tried to run a report from Peachtree Accounting 2010. The error states "Not enough memory to run" and she can't print the report. I looked online and found another user that had this same problem with Peachtree 2010 when connected via LogMeIn here. The error doesn't happen if the user runs the report while sitting at her work computer, but it always happens when connected via LogMeIn. I know most of you will write it off as just a LogMeIn issue, which I'm sure it is - but I would appreciate any constructive input from anyone else who has had the same or a similar problem. Thanks, Justin

    Read the article

  • Webmin and/or port 10000 not working

    - by DisgruntledGoat
    I've recently installed Webmin on an Ubuntu server but I can't get it to work. I asked a recent question about saving iptables, but it turns out you don't need to "save" iptables changes. Anyway, I still can't get Webmin working after opening the port up:

iptables -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT

It seems that either the command is not opening up port 10000, or there is a separate problem with Webmin. If I run iptables -L I see lines like the following, but no port 10000:

ACCEPT tcp -- anywhere anywhere tcp dpt:5555 state NEW
ACCEPT tcp -- anywhere anywhere tcp dpt:8002 state NEW
ACCEPT tcp -- anywhere anywhere tcp dpt:9001 state NEW

However, there is a line:

ACCEPT tcp -- anywhere anywhere tcp dpt:webmin

Any ideas why Webmin is not working? The IP address works fine and we can view web sites on the server, but https://[ip]:10000/ (or http) doesn't work.

    Read the article

  • Database model for keeping track of likes/shares/comments on blog posts over time

    - by gage
    My goal is to keep track of the popular posts on different blog sites based on social network activity at any given time. The goal is not to simply get the most popular now, but instead to find posts that are popular compared to other posts on the same blog. For example, I follow a tech blog, a sports blog, and a gossip blog. The tech blog gets way more readership than the other two blogs, so in raw numbers every post on the tech blog will always outnumber views on the other two. So let's say the average tech blog post gets 500 Facebook likes and the other two get an average of 50 likes per post. Then when there is a sports blog post with 200 FB likes and a gossip blog post with 300, while the tech blog posts today have 500 likes, I want to highlight the sports and gossip blog posts (more likes than average, vs. the tech blog with a higher number of likes that is merely average for that blog).

The approach I am thinking of taking is to make an entry in a database for each blog post. Every x minutes (say every 15 minutes) I will check how many likes/shares/comments an entry has received on all the social networks (Facebook, Twitter, Google+, LinkedIn). So over time there will be a history of likes for each blog post, i.e. post 1234:

after 15 min: 10 fb likes, 4 tweets, 6 g+
after 30 min: 15 fb likes, 15 tweets, 10 g+
...
after 48 hours: 200 fb likes, 25 tweets, 15 g+

By keeping a history like this for each blog post I can know the average number of likes/shares/tweets at any given time interval. So for example, if the average number of FB likes for all blog posts 48 hours after posting is 50, and a particular post has 200, I can mark that as a popular post and feature/highlight it. A consideration in the design is to be able to easily query the values (likes/shares) for a specific time frame, i.e. FB likes after 30 min or tweets after 24 hrs, in order to compute averages to compare against (or should averages be stored in their own table?). If this approach is flawed or could use improvement please let me know, but it is not my main question.

My main question is: what should a database schema for storing this info look like? Assuming that the above approach is taken, I am trying to figure out what a database schema for storing the likes over time would look like. I am brand new to databases; in doing some basic reading I see that it is advisable to make a 3NF database. I have come up with the following possible schema.

Schema 1

DB Popular Posts
Table: Post
  post_id (pk)
  url
  title
Table: Social Activity
  activity_id (pk)
  url (fk)
  type (i.e. facebook, twitter, g+)
  value
  timestamp

This was my initial instinct (based on my very limited DB knowledge). As far as I understand, this schema would be 3NF? I searched for designs of similar database models, and found this question on Stack Overflow: http://stackoverflow.com/questions/11216080/data-structure-for-storing-height-and-weight-etc-over-time-for-multiple-users . The scenario in that question is similar (recording weight/height of users over time). Taking the accepted answer for that question and applying it to my model results in something like:

Schema 2 (same as above, but break down the social activity into 2 tables)

DB Popular Posts
Table: Post
  post_id (pk)
  url
  title
Table: Social Measurement
  measurement_id (pk)
  post_id (fk)
  timestamp
Table: Social Stat
  stat_id (pk)
  measurement_id (fk)
  type (i.e. facebook, twitter, g+)
  value

The advantage I see in schema 2 is that I will likely want to access all the values for a given time, i.e.
when making a measurement at 30 min after a post is published, I will simultaneously check the number of FB likes, FB shares, FB comments, tweets, g+, LinkedIn. So with this schema it may be easier to get all stats for a measurement_id corresponding to a certain time, i.e. all social stats for post 1234 at time x. Another thought I had: since it doesn't make sense to compare the number of FB likes with the number of tweets or g+ shares, maybe it makes sense to separate each social measurement into its own table?

Schema 3

DB Popular Posts
Table: Post
  post_id (pk)
  url
  title
Table: fb_likes
  fb_like_id (pk)
  post_id (fk)
  timestamp
  value
Table: fb_shares
  fb_share_id (pk)
  post_id (fk)
  timestamp
  value
Table: tweets
  tweet_id (pk)
  post_id (fk)
  timestamp
  value
Table: google_plus
  google_plus_id (pk)
  post_id (fk)
  timestamp
  value

As you can see, I am generally lost/unsure of what approach to take. I'm sure this is a typical type of database problem (storing measurements over time, e.g. temperature statistics) that must have a common solution. Is there a design pattern/model for this, and does it have a name? I tried searching for "database periodic data collection" or "database measurements over time" but didn't find anything specific. What would be an appropriate model to solve the needs of this problem?
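Whichever schema is chosen, it has to make the core comparison cheap: "count for this post at offset t vs. the blog's average at offset t". Here's a rough in-memory C# sketch of that logic (all names hypothetical), just to pin down the query the schema needs to serve - an illustration, not a recommendation of any one schema:

using System;
using System.Collections.Generic;
using System.Linq;

public class SocialStat
{
    public int PostId;        // which post was measured
    public string Blog;       // which blog the post belongs to
    public TimeSpan Offset;   // time since publication when the measurement was taken
    public string Type;       // "fb_like", "tweet", "g+", ...
    public int Value;         // the count at that moment
}

public static class Popularity
{
    // Returns the ids of posts whose count at the given offset beats their
    // own blog's average at the same offset by the given factor.
    public static IEnumerable<int> PopularPosts(
        IEnumerable<SocialStat> stats, TimeSpan offset, string type, double factor = 2.0)
    {
        return stats
            .Where(s => s.Offset == offset && s.Type == type)
            .GroupBy(s => s.Blog)                              // compare within each blog
            .SelectMany(g =>
            {
                double blogAverage = g.Average(s => s.Value);  // per-blog baseline
                return g.Where(s => s.Value > factor * blogAverage)
                        .Select(s => s.PostId);
            });
    }
}

In SQL terms this is essentially a group-by producing per-blog averages joined back to the individual rows, which any of the three schemas should be able to serve; schema 2 mainly makes the "all stats for one measurement" read cheaper.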

    Read the article

  • Windows Vista language text service problem

    - by Azho KG
    Hi all, I'm using the English version of Vista and having problems with programs that display Russian characters. For example, dictionaries don't work for me, since they display Russian characters incorrectly. I also see just "magic" characters in a text editor (Notepad) when I open a Russian text file. I tried changing the whole Vista interface language to Russian, but that still didn't solve the problem. I CAN read any web page from the browser; that's not a problem. Also, adding "Russian" in "Text Services and Input Languages" doesn't solve this problem. Does anyone know how to solve this? Thanks. My system: 32-bit Windows Vista Home Premium - SP2

    Read the article

< Previous Page | 461 462 463 464 465 466 467 468 469 470 471 472  | Next Page >