Search Results

Search found 88 results on 4 pages for 'finer recliner'.


  • How to get the finer resolution for asus eee pc 1000hd after installing windows7?

    - by vij
    Hi, I have installed Windows 7 on my 2-year-old ASUS EEE PC 1000HD. The default operating system it came with was Windows XP. The system is now working fine, but there are some display problems, which I will describe: Firstly, the Windows 7 logo is missing during booting. There are no options for changing the resolution; the only option available is 800x600. The boot time is quite long compared to XP. The graphics driver used on this system is Standard VGA Graphics Adapter. The processor is an Intel Celeron with 1GB RAM. I think with this hardware it should be possible to run Windows 7 at a finer resolution, so why are these problems occurring? I even tried to install the graphics driver provided by Intel for this system, but I got the error message "this system doesn't meet the necessary requirements". So how can I overcome this problem?

    Read the article

  • Can you force a crash if a write occurs to a given memory location with finer than page granularity?

    - by Joseph Garvin
    I'm writing a program that for performance reasons uses shared memory (alternatives have been evaluated, and they are not fast enough for my task, so suggestions to not use it will be downvoted). In the shared memory region I am writing many structs of a fixed size. There is one program responsible for writing the structs into shared memory, and many clients that read from it. However, there is one member of each struct that clients need to write to (a reference count, which they will update atomically). All of the other members should be read only to the clients. Because clients need to change that one member, they can't map the shared memory region as read only. But they shouldn't be tinkering with the other members either, and since these programs are written in C++, memory corruption is possible. Ideally, it should be as difficult as possible for one client to crash another. I'm only worried about buggy clients, not malicious ones, so imperfect solutions are allowed. I can try to stop clients from overwriting by declaring the members in the header they use as const, but that won't prevent memory corruption (buffer overflows, bad casts, etc.) from overwriting. I can insert canaries, but then I have to constantly pay the cost of checking them. Instead of storing the reference count member directly, I could store a pointer to the actual data in a separate mapped write only page, while keeping the structs in read only mapped pages. This will work, the OS will force my application to crash if I try to write to the pointed to data, but indirect storage can be undesirable when trying to write lock free algorithms, because needing to follow another level of indirection can change whether something can be done atomically. Is there any way to mark smaller areas of memory such that writing them will cause your app to blow up? Some platforms have hardware watchpoints, and maybe I could activate one of those with inline assembly, but I'd be limited to only 4 at a time on 32-bit x86 and each one could only cover part of the struct because they're limited to 4 bytes. It'd also make my program painful to debug ;)

    Read the article

  • Interfacing HTTPBuilder and HTMLUnit... some code

    - by Misha Koshelev
    Ok, this isn't even a question:

    import com.gargoylesoftware.htmlunit.HttpMethod
    import com.gargoylesoftware.htmlunit.WebClient
    import com.gargoylesoftware.htmlunit.WebResponseData
    import com.gargoylesoftware.htmlunit.WebResponseImpl
    import com.gargoylesoftware.htmlunit.util.Cookie
    import com.gargoylesoftware.htmlunit.util.NameValuePair
    import static groovyx.net.http.ContentType.TEXT
    import java.io.File
    import java.util.logging.Logger
    import org.apache.http.impl.cookie.BasicClientCookie

    /**
     * HTTPBuilder class
     *
     * Allows Javascript processing using HTMLUnit
     *
     * @author Misha Koshelev
     */
    class HTTPBuilder {
        /**
         * HTTP Builder - implement this way to avoid underlying logging output
         */
        def httpBuilder

        /**
         * Logger
         */
        def logger

        /**
         * Directory for storing HTML files, if any
         */
        def saveDirectory=null

        /**
         * Index of current HTML file in directory
         */
        def saveIdx=1

        /**
         * Current page text
         */
        def text=null

        /**
         * Response for processJavascript (Complex Version)
         */
        def resp=null

        /**
         * URI for processJavascript (Complex Version)
         */
        def uri=null

        /**
         * HttpMethod for processJavascript (Complex Version)
         */
        def method=null

        /**
         * Default constructor
         */
        public HTTPBuilder() {
            // New HTTPBuilder
            httpBuilder=new groovyx.net.http.HTTPBuilder()

            // Logging
            logger=Logger.getLogger(this.class.name)
        }

        /**
         * Constructor that allows saving output files for testing
         */
        public HTTPBuilder(saveDirectory,saveIdx) {
            this()
            this.saveDirectory=saveDirectory
            this.saveIdx=saveIdx
        }

        /**
         * Save text and return corresponding XmlSlurper object
         */
        public saveText() {
            if (saveDirectory) {
                def file=new File(saveDirectory.toString()+File.separator+saveIdx+".html")
                logger.finest "HTTPBuilder.saveText: file=\""+file.toString()+"\""
                file<<text
                saveIdx++
            }
            new XmlSlurper(new org.cyberneko.html.parsers.SAXParser()).parseText(text)
        }

        /**
         * Wrapper around supertype get method
         */
        public Object get(Map<String,?> args) {
            logger.finer "HTTPBuilder.get: args=\""+args+"\""
            args.contentType=TEXT
            httpBuilder.get(args) { resp,reader->
                text=reader.text
                this.resp=resp
                this.uri=args.uri
                this.method=HttpMethod.GET
                saveText()
            }
        }

        /**
         * Wrapper around supertype post method
         */
        public Object post(Map<String,?> args) {
            logger.finer "HTTPBuilder.post: args=\""+args+"\""
            args.contentType=TEXT
            httpBuilder.post(args) { resp,reader->
                text=reader.text
                this.resp=resp
                this.uri=args.uri
                this.method=HttpMethod.POST
                saveText()
            }
        }

        /**
         * Load cookies from specified file
         */
        def loadCookies(file) {
            logger.finer "HTTPBuilder.loadCookies: file=\""+file.toString()+"\""
            file.withObjectInputStream { ois->
                ois.readObject().each { cookieMap->
                    def cookie=new BasicClientCookie(cookieMap.name,cookieMap.value)
                    cookieMap.remove("name")
                    cookieMap.remove("value")
                    cookieMap.entrySet().each { entry->
                        cookie."${entry.key}"=entry.value
                    }
                    httpBuilder.client.cookieStore.addCookie(cookie)
                }
            }
        }

        /**
         * Save cookies to specified file
         */
        def saveCookies(file) {
            logger.finer "HTTPBuilder.saveCookies: file=\""+file.toString()+"\""
            def cookieMaps=new ArrayList(new LinkedHashMap())
            httpBuilder.client.cookieStore.getCookies().each { cookie->
                def cookieMap=[:]
                cookieMap.version=cookie.version
                cookieMap.name=cookie.name
                cookieMap.value=cookie.value
                cookieMap.domain=cookie.domain
                cookieMap.path=cookie.path
                cookieMap.expiryDate=cookie.expiryDate
                cookieMaps.add(cookieMap)
            }
            file.withObjectOutputStream { oos->
                oos.writeObject(cookieMaps)
            }
        }

        /**
         * Process Javascript using HTMLUnit (Simple Version)
         */
        def processJavascript() {
            logger.finer "HTTPBuilder.processJavascript (Simple)"
            def webClient=new WebClient()
            def tempFile=File.createTempFile("HTMLUnit","")
            tempFile<<text
            def page=webClient.getPage("file://"+tempFile.toString())
            webClient.waitForBackgroundJavaScript(10000)
            text=page.asXml()
            webClient.closeAllWindows()
            tempFile.delete()
            saveText()
        }

        /**
         * Process Javascript using HTMLUnit (Complex Version)
         * Closure, if specified, used to determine presence of necessary elements
         */
        def processJavascript(closure) {
            logger.finer "HTTPBuilder.processJavascript (Complex)"

            // Convert response headers
            def headers=new ArrayList()
            resp.allHeaders.each() { header->
                headers.add(new NameValuePair(header.name,header.value))
            }
            def responseData=new WebResponseData(text.bytes,resp.statusLine.statusCode,resp.statusLine.toString(),headers)
            def response=new WebResponseImpl(responseData,uri.toURL(),method,0)

            // Transfer cookies
            def webClient=new WebClient()
            httpBuilder.client.cookieStore.getCookies().each { cookie->
                webClient.cookieManager.addCookie(new Cookie(cookie.domain,cookie.name,cookie.value,cookie.path,cookie.expiryDate,cookie.isSecure()))
            }
            def page=webClient.loadWebResponseInto(response,webClient.getCurrentWindow())

            // Wait for condition
            if (closure) {
                for (i in 1..20) {
                    if (closure(page)) {
                        break;
                    }
                    synchronized(page) {
                        page.wait(500);
                    }
                }
            }

            // Return text
            text=page.asXml()
            webClient.closeAllWindows()
            saveText()
        }
    }

    Allows one to interface HTTPBuilder with HTMLUnit! Enjoy Misha

    Read the article

  • Proper updating of GeoClipMaps

    - by thr
    I have been working on an implementation of GPU-based geometry clipmaps, but there is a section of the GPU Gems 2 article that I just can't seem to understand, specifically this paragraph and more precisely the bolded part: "The choice of grid size n = 2^k - 1 has the further advantage that the finer level is never exactly centered with respect to its parent next-coarser level. In other words, it is always offset by 1 grid unit either left or right, as well as either top or bottom (see Figure 2-4), depending on the position of the viewpoint. In fact, it is necessary to allow a finer level to shift while its next-coarser level stays fixed, and therefore the finer level must sometimes be off-center with respect to the next-coarser level. An alternative choice of grid size, such as n = 2^k - 3, would provide the possibility for exact centering." Let's take an example image from the article. My "understanding" of the way the clipmaps were updated was that you floor the position of the viewpoint to an int and thus get the center vertex point; if this is not the same as the previous center point, you update the entire map. Now, this obviously is not the case - but what I am failing to understand is this: if you look at the image above, if the viewpoint were to move one unit to the right, then the inner ring (the one just around the viewpoint plus the white center square) would end up with a 1-unit space on both its left and right sides. But there is nothing in the paper that deals with this; what I mean is that it would end up looking like this (excuse my crummy cut-and-paste editing of the above image). This is obviously not a valid state of the clipmap. So, would the solution be that a clip ring (layer) can only move in increments of the ring/layer it's contained within? Wouldn't this end up being very restrictive? I feel like I am missing some crucial understanding of parts of the algorithm, but I have been over both this paper and the original paper from 2004 and I just can't see what I am not getting.

    Read the article

  • Oracle JDBC connection exception in Solaris but not Windows?

    - by lupefiasco
    I have some Java code that connects to an Oracle database using DriverManager.getConnection(). It works just fine on my Windows XP machine. However, when running the same code on a Solaris machine, I get the following exception. Both machines can reach the database machine on the network. I have included the Oracle trace logs.

    Mar 23, 2010 12:12:33 PM org.apache.commons.configuration.ConfigurationUtils locate
    FINE: ConfigurationUtils.locate(): base is /users/theUser/ADCompare, name is props.txt
    Mar 23, 2010 12:12:33 PM org.apache.commons.configuration.ConfigurationUtils locate
    FINE: Loading configuration from the path /users/theUser/ADCompare/props.txt
    Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver connect
    FINE: OracleDriver.connect(url=jdbc:oracle:thin:@//theServer:1521/theService, info)
    Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver connect
    FINER: OracleDriver.connect() walletLocation:(null)
    Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver parseUrl
    FINER: OracleDriver.parseUrl(url=jdbc:oracle:thin:@//theServer:1521/theService)
    Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver parseUrl
    FINER: sub_sub_index=12, end=46, next_colon_index=16, user=17, slash=18, at_sign=17
    Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver parseUrl
    FINER: OracleDriver.parseUrl(url):return
    Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver connect
    FINER: user=theUser, password=******, database=//theServer:1521/theService, protocol=thin, prefetch=null, batch=null, accumulate batch result =true, remarks=null, synonyms=null
    Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.PhysicalConnection <init>
    FINE: PhysicalConnection.PhysicalConnection(ur="jdbc:oracle:thin:@//theServer:1521/theService", us="theUser", p="******", db="//theServer:1521/theService", info)
    Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.PhysicalConnection <init>
    FINEST: PhysicalConnection.PhysicalConnection() : connectionProperties={user=theUser, password=******, protocol=thin}
    Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.PhysicalConnection initialize
    FINE: PhysicalConnection.initialize(ur="jdbc:oracle:thin:@//theServer:1521/theService", us="theUser", access)
    Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.PhysicalConnection initialize
    FINE: PhysicalConnection.initialize(ur, us):return
    Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.PhysicalConnection needLine
    FINE: PhysicalConnection.needLine()--no return

    java.lang.ArrayIndexOutOfBoundsException: 31
        at oracle.net.nl.NVTokens.parseTokens(Unknown Source)
        at oracle.net.nl.NVFactory.createNVPair(Unknown Source)
        at oracle.net.nl.NLParamParser.addNLPListElement(Unknown Source)
        at oracle.net.nl.NLParamParser.initializeNlpa(Unknown Source)
        at oracle.net.nl.NLParamParser.<init>(Unknown Source)
        at oracle.net.resolver.TNSNamesNamingAdapter.loadFile(Unknown Source)
        at oracle.net.resolver.TNSNamesNamingAdapter.checkAndReload(Unknown Source)
        at oracle.net.resolver.TNSNamesNamingAdapter.resolve(Unknown Source)
        at oracle.net.resolver.NameResolver.resolveName(Unknown Source)
        at oracle.net.resolver.AddrResolution.resolveAndExecute(Unknown Source)
        at oracle.net.ns.NSProtocol.establishConnection(Unknown Source)
        at oracle.net.ns.NSProtocol.connect(Unknown Source)
        at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1037)
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:282)
        at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:468)
        at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:165)
        at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:35)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:839)
        at java.sql.DriverManager.getConnection(DriverManager.java:582)
        at java.sql.DriverManager.getConnection(DriverManager.java:185)

    The above exception is also thrown if I use OracleDataSource instead of the generic DriverManager.getConnection(). Any ideas on why the behavior is different in the different environments?
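
    For context, here is a minimal sketch (not from the original post) of the kind of DriverManager-based connection code being described; the URL uses the thin-driver //host:port/service form that appears in the trace, and the host, service name, and credentials are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class OracleConnectSketch {
        public static void main(String[] args) throws Exception {
            // Thin-driver URL in the //host:port/service form seen in the trace above.
            String url = "jdbc:oracle:thin:@//theServer:1521/theService";
            // try-with-resources closes the connection when the block exits.
            try (Connection conn = DriverManager.getConnection(url, "theUser", "thePassword")) {
                System.out.println("Connected: " + !conn.isClosed());
            }
        }
    }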

    Read the article

  • How to layer views

    - by Finer Recliner
    I have a custom-made view that extends the View class. I would like 2 instances of my custom view layered directly on top of each other. How should my layout file look to achieve this?
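
    Not from the original post, but a rough sketch of one common approach: a FrameLayout stacks its children in the order they are added, so two instances laid out with the same bounds sit directly on top of each other. CustomView below is a placeholder for the asker's View subclass, and the snippet shows the programmatic equivalent of the layout file:

    import android.app.Activity;
    import android.os.Bundle;
    import android.widget.FrameLayout;

    public class LayeredViewsActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            FrameLayout container = new FrameLayout(this);
            FrameLayout.LayoutParams fill = new FrameLayout.LayoutParams(
                    FrameLayout.LayoutParams.MATCH_PARENT,
                    FrameLayout.LayoutParams.MATCH_PARENT);
            // CustomView is a placeholder for the custom View subclass.
            // FrameLayout draws children in insertion order, so the second
            // instance is rendered directly on top of the first.
            container.addView(new CustomView(this), fill);
            container.addView(new CustomView(this), fill);
            setContentView(container);
        }
    }

    In an XML layout the equivalent is a FrameLayout whose two children are the fully qualified custom view elements.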

    Read the article

  • Java's Object.wait method with nanoseconds: Is this a joke or am I missing something

    - by Krumia
    I was checking out the Java API source code (Java 8) just out of curiosity, and I found this in java/lang/Object.java. There are three methods named wait: public final native void wait(long timeout): this is the core of all wait methods and has a native implementation. public final void wait(): just calls wait(0). And then there is public final void wait(long timeout, int nanos). The JavaDoc for that particular method tells me: "This method is similar to the wait method of one argument, but it allows finer control over the amount of time to wait for a notification before giving up. The amount of real time, measured in nanoseconds, is given by: 1000000*timeout+nanos" But this is how the method achieves "finer control over the amount of time to wait": if (nanos >= 500000 || (nanos != 0 && timeout == 0)) { timeout++; } wait(timeout); So this method basically does a crude rounding up of nanoseconds to milliseconds. Not to mention that anything below 500000ns/0.5ms will be ignored. Is this piece of code bad/unnecessary code, or am I missing some unseen virtue of declaring this method, and its no-argument cousin, the way they are?
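
    A small self-contained sketch (the class and method names are made up for illustration) that just mirrors the quoted rounding logic, to make the effective millisecond timeout visible:

    public class WaitRounding {
        // Mirrors the JDK snippet quoted above: the nanosecond argument can only
        // ever bump the millisecond timeout by one, or turn wait(0, n) into wait(1)
        // so it does not wait forever.
        static long effectiveMillis(long timeout, int nanos) {
            if (nanos >= 500000 || (nanos != 0 && timeout == 0)) {
                timeout++;
            }
            return timeout;
        }

        public static void main(String[] args) {
            System.out.println(effectiveMillis(10, 499999)); // 10 -> sub-millisecond part dropped
            System.out.println(effectiveMillis(10, 500000)); // 11 -> rounded up a whole millisecond
            System.out.println(effectiveMillis(0, 1));       // 1  -> avoids an infinite wait
        }
    }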

    Read the article

  • 'Photo editor' and 'RAW editor' in Shotwell

    - by Chris Wilson
    The preference menu in Shotwell allows the user to specify both an 'External photo editor' and an 'External RAW editor', but I'm confused as to why two external editors would be required. I'm not a photographer, so this confusion may simply be a result of my ignorance, but I thought RAW images were unprocessed photographs, in which case two editors would be kinda redundant. Am I simply missing one of the finer details of photograph processing?

    Read the article

  • Full-Scale Star Trek Props Recreated in LEGO

    - by Jason Fitzpatrick
    Is there a finer way to immortalize your love for one geeky thing than building it with another geeky thing? Flickr user Geeky Tom lives up to his name with these well done 1:1 Star Trek props crafted from LEGO bricks. Hit up the link below to check out the full gallery of phasers, tricorders, and more. LEGO Star Trek TOS [via Neatorama]

    Read the article

  • Toshiba C55D Touchscreen..

    - by user177189
    Getting a new Toshiba C55D touchscreen laptop next week and want to run strictly Ubuntu and Firefox. I want to rip every scrap of Microsoft off of it (it comes pre-loaded). I'm a neophyte to Ubuntu and the finer points of setting up a laptop. Do I first install the latest Ubuntu release, plus Adobe, Gmail, etc., and then go about tearing the MS junk out? If anyone knows the answer, I certainly would appreciate some info on this. Toivo42.

    Read the article

  • Mounting 2.5" SSD in Antec Atlas 550's 3.5" bay

    - by cecilkorik
    Just got some new SSDs to add to my servers, unfortunately there doesn't appear to be anywhere to actually mount them. The cases are Antec Atlas 550 mid-tower server cases. The problem is that the 3.5" bays are intended for 3.5" hard drives (obviously) not SSDs, so they are mounted from the bottom with rubber grommets to reduce vibration. There are NO side-holes in any of the 3.5" bays, all mounting has to be done from the bottom of the drive, through the thick rubber grommets, which requires special extra-long screws with large heads. Those screws will not work with the SSD, the thread is too coarse, 2.5" drives apparently use M3 screws with a finer pitch. The case's 3.5" bays have holes aligned for 2.5" drives and by moving the grommets into the appropriate holes they line up with the SSD. But that's as far as I can get. I don't have and cannot find any screws that use the finer M3 pitch that are long enough and have the wide head. I can't remove the grommets because the screw holes are much too wide without them, the entire head of the screw fits through. And again, there are no side-holes, so I can't use the mounting bracket that came with the drive nor any other 2.5"-3.5" bracket I've found. Here is a pic of what I am dealing with (note that the SSD is just resting on the case, it's not currently screwed in if that's not clear) Any ideas for safely mounting these little guys without resorting to duct tape or superglue would be very appreciated. If anyone knows where I could buy the appropriate type of M3 threaded screw or has any other ideas please help me out here.

    Read the article

  • HTTP Builder/Groovy - lost 302 (redirect) handling?

    - by Misha Koshelev
    Dear All: I am reading here http://groovy.codehaus.org/modules/http-builder/doc/handlers.html "In cases where a response sends a redirect status code, this is handled internally by Apache HttpClient, which by default will simply follow the redirect by re-sending the request to the new URL. You do not need to do anything special in order to follow 302 responses." This seems to work fine when I simply use the get() or post() methods without a closure. However, when I use a closure, I seem to lose 302 handling. Is there some way I can handle this myself? Thank you

    p.s. Here is my log output showing it is a 302 response

    [java] FINER: resp.statusLine: "HTTP/1.1 302 Found"

    Here is the relevant code:

    // Copyright (C) 2010 Misha Koshelev. All Rights Reserved.
    package com.mksoft.fbbday.main

    import groovyx.net.http.ContentType

    import java.util.logging.Level
    import java.util.logging.Logger

    class HTTPBuilder {
        def dataDirectory

        HTTPBuilder(dataDirectory) {
            this.dataDirectory=dataDirectory
        }

        // Main logic
        def logger=Logger.getLogger(this.class.name)
        def closure={resp,reader->
            logger.finer("resp.statusLine: \"${resp.statusLine}\"")
            if (logger.isLoggable(Level.FINEST)) {
                def respHeadersString='Headers:';
                resp.headers.each() { header->
                    respHeadersString+="\n\t${header.name}=\"${header.value}\""
                }
                logger.finest(respHeadersString)
            }
            def text=reader.text
            def lastHtml=new File("${dataDirectory}${File.separator}last.html")
            if (lastHtml.exists()) {
                lastHtml.delete()
            }
            lastHtml<<text
            new XmlSlurper(new org.cyberneko.html.parsers.SAXParser()).parseText(text)
        }
        def processArgs(args) {
            if (logger.isLoggable(Level.FINER)) {
                def argsString='Args:';
                args.each() { arg->
                    argsString+="\n\t${arg.key}=\"${arg.value}\""
                }
                logger.finer(argsString)
            }
            args.contentType=groovyx.net.http.ContentType.TEXT
            args
        }

        // HTTPBuilder methods
        def httpBuilder=new groovyx.net.http.HTTPBuilder ()
        def get(args) {
            httpBuilder.get(processArgs(args),closure)
        }
        def post(args) {
            args.contentType=groovyx.net.http.ContentType.TEXT
            httpBuilder.post(processArgs(args),closure)
        }
    }

    Here is a specific tester:

    #!/usr/bin/env groovy

    import groovyx.net.http.HTTPBuilder
    import groovyx.net.http.Method
    import static groovyx.net.http.ContentType.URLENC

    import java.util.logging.ConsoleHandler
    import java.util.logging.Level
    import java.util.logging.Logger

    // MUST ENTER VALID FACEBOOK EMAIL AND PASSWORD BELOW !!!
    def email=''
    def pass=''

    // Remove default loggers
    def logger=Logger.getLogger('')
    def handlers=logger.handlers
    handlers.each() { handler->
        logger.removeHandler(handler)
    }

    // Log ALL to Console
    logger.setLevel Level.ALL
    def consoleHandler=new ConsoleHandler()
    consoleHandler.setLevel Level.ALL
    logger.addHandler(consoleHandler)

    // Facebook - need to get main page to capture cookies
    def http = new HTTPBuilder()
    http.get(uri:'http://www.facebook.com')

    // Login
    def html=http.post(uri:'https://login.facebook.com/login.php?login_attempt=1',body:[email:email,pass:pass])
    assert html==null // Why null?
    html=http.post(uri:'https://login.facebook.com/login.php?login_attempt=1',body:[email:email,pass:pass]) { resp,reader->
        assert resp.statusLine.statusCode==302 // Shouldn't we be redirected???
        // http://groovy.codehaus.org/modules/http-builder/doc/handlers.html
        // "In cases where a response sends a redirect status code, this is handled internally by Apache HttpClient, which by default will simply follow the redirect by re-sending the request to the new URL. You do not need to do anything special in order to follow 302 responses."
    }

    Here are relevant logs:

    FINE: Receiving response: HTTP/1.1 302 Found
    Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
    FINE: << HTTP/1.1 302 Found
    Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
    FINE: << Cache-Control: private, no-store, no-cache, must-revalidate, post-check=0, pre-check=0
    Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
    FINE: << Expires: Sat, 01 Jan 2000 00:00:00 GMT
    Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
    FINE: << Location: http://www.facebook.com/home.php?
    Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
    FINE: << P3P: CP="DSP LAW"
    Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
    FINE: << Pragma: no-cache
    Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
    FINE: << Set-Cookie: datr=1275687438-9ff6ae60a89d444d0fd9917abf56e085d370277a6e9ed50c1ba79; expires=Sun, 03-Jun-2012 21:37:24 GMT; path=/; domain=.facebook.com
    Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
    FINE: << Set-Cookie: lxe=koshelev%40post.harvard.edu; expires=Tue, 28-Sep-2010 15:24:04 GMT; path=/; domain=.facebook.com; httponly
    Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
    FINE: << Set-Cookie: lxr=deleted; expires=Thu, 04-Jun-2009 21:37:23 GMT; path=/; domain=.facebook.com; httponly
    Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
    FINE: << Set-Cookie: pk=183883c0a9afab1608e95d59164cc7dd; path=/; domain=.facebook.com; httponly
    Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
    FINE: << Content-Type: text/html; charset=utf-8
    Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
    FINE: << X-Cnection: close
    Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
    FINE: << Date: Fri, 04 Jun 2010 21:37:24 GMT
    Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
    FINE: << Content-Length: 0
    Jun 4, 2010 4:37:22 PM org.apache.http.client.protocol.ResponseProcessCookies processCookies
    FINE: Cookie accepted: "[version: 0][name: datr][value: 1275687438-9ff6ae60a89d444d0fd9917abf56e085d370277a6e9ed50c1ba79][domain: .facebook.com][path: /][expiry: Sun Jun 03 16:37:24 CDT 2012]".
    Jun 4, 2010 4:37:22 PM org.apache.http.client.protocol.ResponseProcessCookies processCookies
    FINE: Cookie accepted: "[version: 0][name: lxe][value: koshelev%40post.harvard.edu][domain: .facebook.com][path: /][expiry: Tue Sep 28 10:24:04 CDT 2010]".
    Jun 4, 2010 4:37:22 PM org.apache.http.client.protocol.ResponseProcessCookies processCookies
    FINE: Cookie accepted: "[version: 0][name: lxr][value: deleted][domain: .facebook.com][path: /][expiry: Thu Jun 04 16:37:23 CDT 2009]".
    Jun 4, 2010 4:37:22 PM org.apache.http.client.protocol.ResponseProcessCookies processCookies
    FINE: Cookie accepted: "[version: 0][name: pk][value: 183883c0a9afab1608e95d59164cc7dd][domain: .facebook.com][path: /][expiry: null]".
    Jun 4, 2010 4:37:22 PM org.apache.http.impl.client.DefaultRequestDirector execute
    FINE: Connection can be kept alive indefinitely
    Jun 4, 2010 4:37:22 PM groovyx.net.http.HTTPBuilder doRequest
    FINE: Response code: 302; found handler: post302$_run_closure2@7023d08b
    Jun 4, 2010 4:37:22 PM groovyx.net.http.HTTPBuilder doRequest
    FINEST: response handler result: null
    Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.SingleClientConnManager releaseConnection
    FINE: Releasing connection org.apache.http.impl.conn.SingleClientConnManager$ConnAdapter@605b28c9

    You can see there is clearly a location argument. Thank you Misha

    Read the article

  • Model login constraints based on time

    - by DaDaDom
    Good morning. For an existing web application I need to implement "time-based login constraints". That means that for each user, and later maybe each group, I can define timeslots during which they are (not) allowed to log in to the system. As all data for the application is stored in database tables, I need to model this idea in that form. My first approach, which I will try to explain here:
    - Create a tree of login constraints (called "timeslots") with the main "categories", like "workday", "weekend", "public holiday", etc. on the top level, which are in a "sorted" order (meaning "public holiday" has a higher priority than "weekday").
    - For each top-level node create subnodes, which have a finer timespan, like "monday", "tuesday", ...
    - Below that, create an "hour" level: 0, 1, 2, ..., 23. No further details are necessary.
    - Set every member to "allowed" by default.
    - For every member of the system create a 1:n relationship member:timeslots which defines constraints, e.g. a member A may have A:monday-forbidden and A:tuesday-forbidden.
    - Do a depth-first search at every login and check if the member has a constraint.
    Why a depth-first search? Well, I thought that a member might have the rules A:monday->forbidden, A:monday-10->allowed, A:monday-11->allowed, so a login on Monday at 12:30 would fail, but one at 10:30 would succeed. For performance reasons I could break the relational database paradigm and set a flag for every entry in the member-to-timeslots table which is set to true if the member has information set for "finer" timeslots, but that's a second step. Is this model in principle a good idea? Are there existing models? Thanks.
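
    Purely as an illustration (names and structure are assumptions, not part of the original question), a minimal sketch of the tree walk described above, where a null rule means "inherit from the parent" and the deepest explicit rule on the path category -> day -> hour wins:

    import java.util.ArrayList;
    import java.util.List;

    class Timeslot {
        final String name;
        final List<Timeslot> children = new ArrayList<>();
        Boolean allowed; // null = no explicit rule for this member at this node

        Timeslot(String name) { this.name = name; }
    }

    class LoginPolicy {
        // Walks from the matching category node down to the hour node and returns
        // the most specific explicit rule found; the default is "allowed".
        static boolean mayLogIn(Timeslot category, String day, int hour) {
            boolean decision = category.allowed != null ? category.allowed : true;
            for (Timeslot d : category.children) {
                if (!d.name.equals(day)) continue;
                if (d.allowed != null) decision = d.allowed;
                for (Timeslot h : d.children) {
                    if (h.name.equals(String.valueOf(hour)) && h.allowed != null) {
                        decision = h.allowed;
                    }
                }
            }
            return decision;
        }
    }

    With the example rules above (monday forbidden, monday-10 and monday-11 allowed), a login attempt at 10:30 passes while one at 12:30 is rejected.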

    Read the article

  • Applying fine-grained security to an existing application

    - by Mark
    I've inherited a reasonably large and complex ASP.NET MVC3 web application using EF Code First on SQL Server. It uses ASP.NET Membership roles with database authentication. The controller actions are secured with attributes derived from AuthorizeAttribute that map roles to actions. There are extension methods for the finer points, such as showing a particular widget to particular roles. This works great and I have a good understanding of the current security model. I've been asked to provide finer-grained security at the data level. For example, a 'Customer' user can only see data (throughout the database) associated with themselves and not other Customers. The problem is that 'Customer' is only 1 of 5 different types with their own specific restrictions (each of the 9 roles is one of these 5 types). The best thing I can think of is to go through all the data repositories and extend each and every LINQ statement/query with a filter for every user type. Even if I had time for that, it doesn't seem like the most elegant way. Any suggestions? I really don't know where to start with this, so anything could be helpful. Many thanks.

    Read the article

  • How does TDD address interaction between objects?

    - by Gigi
    TDD proponents claim that it results in better design and decoupled objects. I can understand that writing tests first enforces the use of things like dependency injection, resulting in loosely coupled objects. However, TDD is based on unit tests - which test individual methods and not the integration between objects. And yet, TDD expects design to evolve from the tests themselves. So how can TDD possibly result in a better design at the integration (i.e. inter-object) level when the granularity it addresses is finer than that (individual methods)?
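
    To make the tension concrete (this example is not the poster's, just an illustration): even a single unit test written first forces the collaborator's interface into existence, which is the sense in which TDD is said to shape inter-object design. Sketched with a hand-rolled fake so no mocking library is assumed:

    // The test for Checkout is written before any real PaymentGateway exists,
    // so the inter-object contract (the interface and how it is called) is
    // designed from the caller's point of view.
    interface PaymentGateway {
        boolean charge(String account, int cents);
    }

    class Checkout {
        private final PaymentGateway gateway;

        Checkout(PaymentGateway gateway) { this.gateway = gateway; }

        boolean placeOrder(String account, int cents) {
            return gateway.charge(account, cents);
        }
    }

    class CheckoutTest {
        public static void main(String[] args) {
            // Hand-rolled fake standing in for the not-yet-written gateway.
            final boolean[] charged = { false };
            PaymentGateway fake = (account, cents) -> {
                charged[0] = true;
                return true;
            };
            Checkout checkout = new Checkout(fake);
            assert checkout.placeOrder("acct-1", 1999); // run with java -ea
            assert charged[0]; // the interaction itself is what the test pins down
        }
    }

    Whether that per-method design pressure adds up to good integration-level design is exactly what the question challenges; the example only shows where the claimed effect comes from.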

    Read the article

  • Kinetic Scroll in Maverick

    - by Shoaib
    I have a Sony VAIO FW 230J laptop. I was using Lucid and upgraded my system to the Maverick beta in September 2010. Although there were a few issues with the beta release on my system, I never felt stuck. The most notable part of my Ubuntu experience was the smooth, kinetic scrolling and a very sensitive touchpad with finer control, which had been merely ordinary in previous releases and even on Windows 7 currently. This release of Ubuntu is definitely improved for a touch UI experience. I believe that this is the magic of 10.10. For my official system I waited until the final release was out. After installing (upgrading) the official system to Maverick, I am not getting the same smooth, kinetic scroll experience. My laptop has Graphics: Intel GMA X4500MHD, while my official desktop has Graphics: Intel GMA X4500HD. Do I need to install or update some software package to get a similar kinetic scroll experience?

    Read the article

  • Hyper-V for Developers - presentation from NxtGenUG Oxford (including link to more info on Dynamic Memory)

    - by Liam Westley
    Many thanks to Richard Hopton and the NxtGenUG guys in Oxford for inviting me to talk on Hyper-V for Developers last night, and to Research Machines for providing the venue.  It was great to have developers not yet using Hyper-V who were really interested in some of the finer points to help them with specific requirements. For those wanting to follow up on the topics I covered, you can download the presentation deck as either PDF (with speaker notes included) or as the original PowerPoint slidedeck,   http://www.tigernews.co.uk/blog-twickers/nxtgenugoxford/HyperV4Devs.pdf   http://www.tigernews.co.uk/blog-twickers/nxtgenugoxford/HyperV4Devs.zip I also mentioned that the new feature, Dynamic Memory, coming in Service Pack 1, had been presented in a session at TechEd 2010 by Ben Armstrong, and you can download his presentation from here,   http://blogs.msdn.com/b/virtual_pc_guy/archive/2010/06/08/talking-about-dynamic-memory.aspx

    Read the article

  • Explicit resource loading in Ogre (Mogre)

    - by sebf
    I am just starting to learn Mogre, and what I would like to do is to be able to load resources 'explicitly' (i.e. I just provide an absolute path instead of using a resource group tied to a directory). This is very different from manually loading resources, which I believe in Ogre has a very specific meaning: to build up the object using Ogre's methods. I want to use Ogre's resource management system/resource loading code, but to have finer control over which files are loaded and in what groups they are. I remember reading how to do this but cannot find the page again; I think it's possible to do something like:
    - Declare a resource group
    - Declare the resource(s) (this is when the actual resource file name is provided)
    - Initialise the resource group to actually load the resource(s)
    Is this the correct procedure? If so, is there any example code showing how to do this?

    Read the article

  • Memory Management/Embedded Management in C

    - by Sauron
    I'm wondering if there is a set or a few good books/tutorials/etc. that go into memory management/allocation specifically (or at least have a good dedicated section on it) when it comes to C. This is more for me learning embedded development and trying to keep size down. I've read and learned C fine, along with the "standard" learning books. However, most of the books don't spend a huge amount of time (understandably, since C is pretty huge in general) going into the finer details of what's going on under the hood. I saw a few on Amazon: http://www.amazon.com/C-Pointers-Dynamic-Memory-Management/dp/0471561525 http://www.amazon.com/Understanding-Pointers-C-Yashavant-Kanetkar/dp/8176563587/ref=pd_sim_b_1 (not sure how relevant this would be). A specific book for embedded development that deals with this would be nice, but code samples or, heck, tutorials or anything about this topic would be helpful!

    Read the article

  • Will backup using rsync preserve ACLs?

    - by Khaled
    I am using BackupPC to back up my server. The backups are done using rsyncd. Currently, I am not using ACLs, but I think it would be good to activate them to have finer control over permissions. My question: will backing up my files using rsync preserve the defined ACLs? BTW, I read an article about ACLs which says that Ubuntu does not support ACLs with tar. Is this still true, or is it outdated? I may not have this problem if I am using rsync. Is this right?

    Read the article

  • Update Manager error 12.04

    - by Shannon Marisa
    I'm using Ubuntu 12.04. When I attempt to run my Update Manager, it doesn't allow any of the updates to be loaded. Under "details" all it says is "patch". It won't even load anything when the Partial Upgrade option comes up. I thought that this Error in Update Manager would help me, but when I followed the instructions and looked for the "PUBKEY" in Terminal, there was none; it said "BADSIG" and "failed to fetch 11.10" instead, and my Update Manager still doesn't update anything. I'm still new to the finer points of Ubuntu and could really use some advice. Did I do something wrong? Or do I need to do a fresh install?

    Read the article

  • When designing a job queue, what should determine the scope of a job?

    - by Stuart Pegg
    We've got a job queue system that'll cheerfully process any kind of job given to it. We intend to use it to process jobs that each contain 2 tasks:
    - Job (pass information from one server to another)
      - Fetch task (get the data, slowly)
      - Send task (send the data, comparatively quickly)
    The difficulty we're having is that we don't know whether to break the tasks into separate jobs, or process the job in one go. Are there any best practices or useful references on this subject? Is there some obvious benefit to a method that we're missing? So far we can see these benefits for each method:
    Split
    - Job lease length reflects job length: rather than the total of the two
    - Finer granularity on recovery: if we lose outgoing connectivity we can tell them all to retry
    - The starting state of the second task is saved to job history: helps with debugging (although similar logging could be added in the single-task method)
    Single
    - Single job to be scheduled: less processing overhead
    - Data not stale on recovery: if the outgoing downtime is quite long, the pending Send jobs could be outdated

    Read the article
