Search Results

Search found 310 results on 13 pages for 'joseph garvin'.

Page 3/13 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • How do you implement Software Transactional Memory?

    - by Joseph Garvin
    In terms of actual low level atomic instructions and memory fences (I assume they're used), how do you implement STM? The part that's mysterious to me is that given some arbitrary chunk of code, you need a way to go back afterward and determine if the values used in each step were valid. How do you do that, and how do you do it efficiently? This would also seem to suggest that just like any other 'locking' solution you want to keep your critical sections as small as possible (to decrease the probability of a conflict), am I right? Also, can STM simply detect "another thread entered this area while the computation was executing, therefore the computation is invalid" or can it actually detect whether clobbered values were used (and thus by luck sometimes two threads may execute the same critical section simultaneously without need for rollback)?
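    A toy illustration of one common answer (a sketch only, loosely in the style of version-based STMs such as TL2; a real implementation uses atomic loads/stores, per-location versioned locks, and fences rather than one big Python lock): each transactional variable carries a version number, reads record the version they saw, writes are buffered, and commit validates that nothing read has been re-versioned before publishing the writes. This also speaks to the last part of the question: two threads can run the same "critical section" concurrently and both commit, as long as neither wrote a location the other read.

        import threading

        _commit_lock = threading.Lock()  # stand-in for fine-grained versioned locks

        class TVar(object):
            def __init__(self, value):
                self.value = value
                self.version = 0

        class Transaction(object):
            def __init__(self):
                self.reads = {}    # TVar -> version observed at first read
                self.writes = {}   # TVar -> buffered new value

            def read(self, tvar):
                if tvar in self.writes:          # read-your-own-writes
                    return self.writes[tvar]
                self.reads.setdefault(tvar, tvar.version)
                return tvar.value

            def write(self, tvar, value):
                self.writes[tvar] = value        # buffered until commit

            def commit(self):
                with _commit_lock:
                    # Validation: every location we read must still be at the
                    # version we saw, i.e. no committed writer clobbered it.
                    if any(t.version != v for t, v in self.reads.items()):
                        return False             # conflict: caller re-runs the transaction
                    for t, value in self.writes.items():
                        t.value = value
                        t.version += 1
                    return True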

    Read the article

  • How can I use functools.partial on multiple methods on an object, and freeze parameters out of order

    - by Joseph Garvin
    I find functools.partial to be extremely useful, but I would like to be able to freeze arguments out of order (the argument you want to freeze is not always the first one), and I'd like to be able to apply it to several methods on a class at once, to make a proxy object that has the same methods as the underlying object except with some of its methods' parameters frozen (think of it as generalizing partial to apply to classes). I've managed to scrape together a version of functools.partial called 'bind' that lets me specify parameters out of order by passing them as keyword arguments. That part works:

        >>> def foo(x, y):
        ...     print x, y
        ...
        >>> bar = bind(foo, y=3)
        >>> bar(2)
        2 3

    But my proxy class does not work, and I'm not sure why:

        >>> class Foo(object):
        ...     def bar(self, x, y):
        ...         print x, y
        ...
        >>> a = Foo()
        >>> b = PureProxy(a, bar=bind(Foo.bar, y=3))
        >>> b.bar(2)
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: bar() takes exactly 3 arguments (2 given)

    I'm probably doing this all sorts of wrong because I'm just going by what I've pieced together from random documentation, blogs, and running dir() on all the pieces. Suggestions both on how to make this work and on better ways to implement it would be appreciated ;) One detail I'm unsure about is how this should all interact with descriptors. Code follows.

        import inspect
        from copy import copy
        from types import MethodType

        class PureProxy(object):
            def __init__(self, underlying, **substitutions):
                self.underlying = underlying
                for name in substitutions:
                    subst_attr = substitutions[name]
                    if hasattr(subst_attr, "underlying"):
                        setattr(self, name, MethodType(subst_attr, self, PureProxy))

            def __getattribute__(self, name):
                return getattr(object.__getattribute__(self, "underlying"), name)

        def bind(f, *args, **kwargs):
            """
            Lets you freeze arguments of a function to certain values. Unlike
            functools.partial, you can freeze arguments by name, which has the
            bonus of letting you freeze them out of order. args will be treated
            just like partial, but kwargs will properly take into account whether
            you are specifying a regular argument by name.
            """
            argspec = inspect.getargspec(f)
            argdict = copy(kwargs)
            if hasattr(f, "im_func"):
                f = f.im_func
            args_idx = 0
            for arg in argspec.args:
                if args_idx >= len(args):
                    break
                argdict[arg] = args[args_idx]
                args_idx += 1
            num_plugged = args_idx

            def new_func(*inner_args, **inner_kwargs):
                args_idx = 0
                for arg in argspec.args[num_plugged:]:
                    if arg in argdict:
                        continue
                    if args_idx >= len(inner_args):
                        # We can't raise an error here because some remaining
                        # arguments may have been passed in by keyword.
                        break
                    argdict[arg] = inner_args[args_idx]
                    args_idx += 1
                f(**dict(argdict, **inner_kwargs))

            new_func.underlying = f
            return new_func
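    For what it's worth, the traceback above is consistent with __getattribute__ being the culprit: it intercepts every attribute lookup, so b.bar never sees the substituted method that __init__ installed on the proxy and instead returns the underlying object's ordinary bound bar, which still wants both x and y. A minimal sketch of one alternative (hypothetical names, Python 2 style to match the snippet above) uses __getattr__, which is only consulted when normal lookup fails, and leans on the fact that functools.partial itself can freeze parameters out of order when you pass them by keyword:

        import functools

        class KwargProxy(object):
            """Hypothetical sketch: forward everything to `underlying`, except
            methods listed in `frozen`, which are wrapped with functools.partial
            so the named parameters are pre-filled."""
            def __init__(self, underlying, **frozen):
                self._underlying = underlying
                self._frozen = dict(
                    (name, functools.partial(getattr(underlying, name), **kw))
                    for name, kw in frozen.items())

            def __getattr__(self, name):
                if name in self._frozen:
                    return self._frozen[name]
                return getattr(self._underlying, name)

        class Foo(object):
            def bar(self, x, y):
                print x, y

        a = Foo()
        b = KwargProxy(a, bar={"y": 3})
        b.bar(2)   # prints: 2 3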

    Read the article

  • Is it possible to store pointers in shared memory without using offsets?

    - by Joseph Garvin
    When using shared memory, each process may mmap the shared region into a different area of its address space. This means that when storing pointers within the shared region, you need to store them as offsets from the start of the shared region. Unfortunately, this complicates the use of atomic instructions (e.g. if you're trying to write a lock-free algorithm). For example, say you have a bunch of reference-counted nodes in shared memory, created by a single writer. The writer periodically, atomically updates a pointer 'p' to point to a valid node with a positive reference count. Readers want to atomically increment the reference count of the node 'p' points to, which works because 'p' points to the beginning of a node (a struct) whose first element is the reference count. Since p always points to a valid node, incrementing the ref count is safe, and makes it safe to dereference 'p' and access other members. However, this all only works when everything is in the same address space. If the nodes and the 'p' pointer are stored in shared memory, then clients suffer a race condition:

        1. x = read p
        2. y = x + offset
        3. increment refcount at y

    During step 2, p may change and x may no longer point to a valid node. The only workaround I can think of is somehow forcing all processes to agree on where to map the shared memory, so that real pointers rather than offsets can be stored in the mmap'd region. Is there any way to do that? I see MAP_FIXED in the mmap documentation, but I don't know how I could pick an address that would be safe.

    Read the article

  • Is it normal for C++ static initialization to appear twice in the same backtrace?

    - by Joseph Garvin
    I'm trying to debug a C++ program compiled with GCC that freezes at startup. GCC protects a function's static local variables with a mutex, and it appears that waiting to acquire such a lock is why it freezes. How this happens is rather confusing. First module A's static initialization occurs (there are __static_init functions GCC invokes that are visible in the backtrace), which calls a function Foo() that has a static local variable. The static local variable is an object whose constructor calls through several layers of functions, then suddenly the backtrace has a few ??'s, and then it is in the static initialization of a second module B (the __static functions occur all over again), which then calls Foo(). But since Foo() never returned the first time, the mutex on the local static variable is still held, and it deadlocks. How can one static init trigger another? My first theory was shared libraries -- that module A would be calling some function in module B that would cause module B to load, thus triggering B's static init, but that doesn't appear to be the case. Module A doesn't use module B at all. So I have a second (and horrifying) guess. Say that:

        Module A uses some templated function, or a function in a templated class, e.g. foo<int>::bar()
        Module B also uses foo<int>::bar()
        Module A doesn't depend on module B at all

    At link time, the linker has two instances of foo<int>::bar(), but this is OK because template functions are marked as weak symbols. At runtime, module A calls foo<int>::bar(), and the static init of module B is triggered, even though module A doesn't depend on module B! Why? Because the linker decided to go with module B's instance of foo<int>::bar() instead of module A's instance at link time. Is this particular scenario valid? Or should one module's static init never trigger static init in another module?

    Read the article

  • Wait on multiple condition variables on Linux without unnecessary sleeps?

    - by Joseph Garvin
    I'm writing a latency-sensitive app that in effect wants to wait on multiple condition variables at once. I've read about several ways to get this functionality on Linux (apparently this is built in on Windows), but none of them seem suitable for my app. The methods I know of are:

        1. Have one thread wait on each of the condition variables you want to wait on, which when woken will signal a single condition variable which you wait on instead.
        2. Cycle through multiple condition variables with a timed wait.
        3. Write dummy bytes to files or pipes instead, and poll on those.

    #1 and #2 are unsuitable because they cause unnecessary sleeping. With #1, you have to wait for the dummy thread to wake up, then signal the real thread, then for the real thread to wake up, instead of the real thread just waking up to begin with -- the extra scheduler quantum spent on this actually matters for my app, and I'd prefer not to have to use a full-fledged RTOS. #2 is even worse: you potentially spend N * timeout time asleep, or your timeout will be 0, in which case you never sleep (endlessly burning CPU and starving other threads is also bad). For #3, pipes are problematic because if the thread being 'signaled' is busy or even crashes (I'm in fact dealing with separate processes rather than threads -- the mutexes and conditions would be stored in shared memory), then the writing thread will be stuck because the pipe's buffer will be full, as will any other clients. Files are problematic because you'd be growing them endlessly the longer the app ran. Is there a better way to do this? Curious about answers appropriate for Solaris as well.
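    For what it's worth, a common Linux-specific answer is one eventfd per event source, all watched by a single epoll set: an eventfd is just a kernel-side 64-bit counter, so a busy or crashed consumer can never wedge the producer the way a full pipe buffer can, and nothing grows on disk. The fds can be shared across processes by inheritance or by passing them over a Unix socket. A rough sketch of the idea (hypothetical, written against Python 3.10+'s os.eventfd wrappers for brevity; in C the same pattern is eventfd(2) plus epoll(7), and on Solaris the closest analogue is event ports):

        import os, select

        NUM_CONDITIONS = 4
        efds = [os.eventfd(0, os.EFD_NONBLOCK) for _ in range(NUM_CONDITIONS)]

        ep = select.epoll()
        for fd in efds:
            ep.register(fd, select.EPOLLIN)

        def signal(i):
            os.eventfd_write(efds[i], 1)      # "notify" condition i: just bumps a counter

        def wait_any(timeout=None):
            ready = ep.poll(timeout)          # sleep until any condition fires
            fired = []
            for fd, _events in ready:
                os.eventfd_read(fd)           # drain the counter back to zero
                fired.append(efds.index(fd))
            return fired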

    Read the article

  • Why can't I pass self as a named argument to an instance method in Python?

    - by Joseph Garvin
    This works:

        >>> def bar(x, y):
        ...     print x, y
        ...
        >>> bar(y=3, x=1)
        1 3

    And this works:

        >>> class foo(object):
        ...     def bar(self, x, y):
        ...         print x, y
        ...
        >>> z = foo()
        >>> z.bar(y=3, x=1)
        1 3

    And even this works:

        >>> foo.bar(z, y=3, x=1)
        1 3

    But why doesn't this work?

        >>> foo.bar(self=z, y=3, x=1)
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unbound method bar() must be called with foo instance as first argument (got nothing instead)

    This makes metaprogramming more difficult, because it requires special case handling. I'm curious if it's somehow necessary by Python's semantics or just an artifact of implementation.
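    One data point (hedged, since this is just the CPython 2.x behaviour as I understand it): the refusal comes from the unbound-method wrapper rather than the function itself, so grabbing the plain function sidesteps it, and in Python 3, where unbound methods are gone, the original call works as written. A small hypothetical illustration:

        # Python 2.x: the wrapper around foo.bar is what insists on a positional
        # foo instance; the raw function underneath does not care.
        class foo(object):
            def bar(self, x, y):
                print x, y

        z = foo()
        foo.bar.im_func(self=z, y=3, x=1)       # plain function: prints 1 3
        foo.__dict__['bar'](self=z, y=3, x=1)   # same function: prints 1 3
        # In Python 3, foo.bar is already the plain function, so
        # foo.bar(self=z, y=3, x=1) works directly.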

    Read the article

  • Extended maxima transform in Matlab

    - by garvin
    I use imtophat to apply a filter to an m x n array. I then find the local maxima using imextendedmax(). I get mostly 0's everywhere except for 1's in the general areas where I am expecting a local max. The weird thing, though, is that I don't get just one local max. Instead, in these places I get MANY elements set to 1, such as:

        00011100000
        00111111000
        00000110000

    yet the values there are close but NOT equal, so I would expect that there would be one that is higher than all of the rest. So I'm wondering a) if this is a bug and how I might fix it, and b) how you would choose the element of these 1's with the highest value.

    Read the article

  • Cisco ASA Hairpinning with Dynamic IP

    - by Joseph Sturtevant
    I currently have my Cisco ASA 5505 firewall configured to forward port 80 from the outside interface to a host on my dmz interface. I also need to allow clients on my inside interface to access the host in the dmz by entering the public IP / DNS record in their browsers. I was able to do that by following the instructions here, resulting in the following configuration:

        static (dmz,outside) tcp interface www 192.168.1.5 www netmask 255.255.255.255
        static (dmz,inside) tcp 74.125.45.100 www 192.168.1.5 www netmask 255.255.255.255

    (Where 74.125.45.100 is my public IP and 192.168.1.5 is the IP of the dmz host) This works great except for the fact that my network has a dynamic public IP, and this configuration will therefore break as soon as my public IP changes. Is there a way to do what I want with a dynamic IP? Note: Adding an internal DNS record won't solve my problem, since I have multiple dmz hosts mapped to different ports on the public IP.

    Read the article

  • how to fix "BusyBox v1.17.1 (Ubuntu 1:1.17.1-10ubuntu1) built-in shell (ash) Enter 'help' for a list of built-in commands?"

    - by Joseph
    So I was using Ubuntu when suddenly the whole thing froze up and I had to reboot. And from that moment on, when the system starts up it shows this selection menu:

        GNU GRUB version 1.99~rc1-13ubuntu3
          Ubuntu, with Linux 2.6.38-10-generic
          Ubuntu, with Linux 2.6.38-10-generic (recovery mode)
          Previous Linux versions
          Memory test (memtest86+)
          Memory test (memtest86+, serial console 115200)

    I have tried all of the available choices, but all I get is another command-line system that reads:

        BusyBox v1.17.1 (Ubuntu 1:1.17.1-10ubuntu1) built-in shell (ash)
        Enter 'help' for a list of built-in commands.

        (initramfs):

    And honestly I can't do anything with it. Does anyone have any idea of what is going on and how I can get Ubuntu to work again?

    Read the article

  • Setting up Github post-receive webhook with private Jenkins and private repo

    - by Joseph S.
    I'm trying to set up a private GitHub project to send a post-receive request to a private Jenkins instance to trigger a project build on branch push. Using the latest Jenkins with the GitHub plugin. I believe I set up everything correctly on the Jenkins side, because sending a request from a public server with curl like this:

        curl http://username:password@ipaddress:port/github-webhook/

    results in:

        Stacktrace: net.sf.json.JSONException: null object

    which is fine, because the JSON payload is missing. Sending the wrong username and password in the URI results in:

        Exception: Failed to login as username

    I interpret this as a correct Jenkins configuration. Both of these requests also result in entries in the Jenkins log. However, when pasting the exact same URI from above into the GitHub repository Post-Receive URLs Service Hook and clicking on Test Hook, absolutely nothing seems to happen on my server. Nothing appears in the Jenkins log, and the GitHub Hook Log in the Jenkins project says "Polling has not run yet." I have run out of ideas and don't know how to proceed further.

    Read the article

  • Monitoring Bandwidth Usage (Per Internal IP) - Cisco ASA 5505

    - by Joseph Sturtevant
    I manage a small network with a Cisco ASA 5505 and a shared DSL connection. I would like to be able to monitor the bandwidth usage of the various users/devices on my network (by IP). Can I do that using the ASA? Has anyone got this working? What is the best way to do this? Some ideas I have seen online:

        SNMP with a tool like Cacti -- does this give per-IP usage with an ASA, or just overall usage?
        Netflow with a tool like ntop -- couldn't get this to work. It seems that the Netflow records sent by the ASA are not exactly standard. Ntop receives them, but doesn't seem to know what to do with them.

    Read the article

  • Ubuntu/Nvidia lists DVI dual cable as single

    - by Joseph Mastey
    I have an NVidia Quadro FX 880M graphics card, from which I am trying to drive 2 monitors: my internal laptop monitor (15.6", 1920x1080, the Nvidia driver says it's running via DisplayPort) and an external 27" monitor (Dell U2711, 2560x1440 native resolution, via DVI). I've hooked the dual-link DVI cable to the dual-link DVI port on my dock (Dell PR03X) and installed the proprietary NVidia driver, but I cannot seem to get the full 2560x1440 out of the larger 27" external monitor. Looking at the NVidia driver settings, the monitor's connection is reported as single-link DVI rather than dual-link, which would explain the reduced resolution. Does anyone have any experience with an issue like this? What can I do to make full use of my new monitor? (Possibly) relevant information:

        There is no DVI port on the laptop itself, but one is provided via the dock.
        The laptop and dock both provide a DisplayPort jack, but I have been unable to get this working with the monitor on either.
        I did have the nouveau driver installed when I installed the nvidia proprietary driver, but have since removed it (no change in the monitor situation when I removed it).
        The 27" reports a max resolution of 1680x1050.

    Thanks, Joe

    Read the article

  • Adding tables to a herd in bucardo

    - by Joseph the Dreamer
    Forgive my ignorance, I am a JS programmer given the task of doing DB replication using bucardo. I understand the concept of how bucardo works, but setting it up is a bit confusing. The set-up is:

        Lubuntu Linux
        Two databases, test_master and test_slave, using PostgreSQL
        Each DB has a table named test, containing 2 columns: id (PK) and test (int)
        I use pgAdmin3

    I have already added them to bucardo's list of databases and added all tables:

        Table: public.test  DB: test_slave   PK: id (int4)
        Table: public.test  DB: test_master  PK: id (int4)

    As you see, because the DBs are identical, even the schema names are identical. So when I do:

        bucardo_ctl add herd sample_herd public.test

    it gets added to the herd, but this command gives bucardo no way to tell which database public.test comes from. So when I add a sync:

        $ bucardo_ctl add sync sample_sync source=sample_herd targetdb=test_slave type=fullcopy
        Failed to add sync: DBD::Pg::st execute failed: ERROR: Source and target databases cannot be the same: test_slave at line 118. at line 30.
        CONTEXT: PL/Perl function "validate_sync" at /usr/bin/bucardo_ctl line 3362.

    What does it mean that source and target cannot be the same? If it got confused as to which public.test to use as the source, how do I differentiate?

    Read the article

  • Certain banking pages not loading

    - by Joseph Lee
    For some unknown reason, I am suddenly unable to access my accounts at several banking and credit sites. I have been a registered user at each site for several years and know I am using the correct user ID and password. Yet, after entering the data, answering security questions, and clicking the submit button, I land on a page with an error message saying there is a technical problem preventing me from accessing my account. On one site, I end up at the sign-in page repeatedly. I am never told that my ID/password are incorrect. I believe this may be firewall-related. Windows Firewall was damaged after a recent malware attack, and I am now using a third-party firewall (Fort Knox). I am not seeing a pop-up indicating sites are blocked or asking me to allow or deny them. I am using Windows 7 Home Premium. I get the same result regardless of the browser; I switched to Maxthon last night and am getting the same result. This is not happening at other sites, and I am able to access some banking sites normally. This is frustrating because I need to make payments and have gone paperless. Any feedback will be appreciated. ---- Joe ----

    Read the article

  • Setting up multiple servers for one domain

    - by Joseph Torraca
    So I am starting up a new website and I was wondering how to set up 5 servers to host the site. I have already purchased 5 Apple Xserves; one will be used as a test server and the other 4 will be for the live site. I have read some websites on the internet, and they all reference using one server, installing software onto it, and having that server do the load balancing. I have also read that you could use a hardware, rack-mounted system and plug the servers into that; the load balancer would then distribute the load. So I have a few questions about each:

        1) How do you set up the software version and have the other servers as "slaves" with one "master" to direct traffic?
        2) Which of the two options above is more reliable and better suited for a startup that doesn't have many users per month, yet (hopefully)?
        3) Is there a theoretical max limit of servers that can be connected to a software load-balancing system? Note: Obviously this will change from software to software, but in terms of the server being able to handle it?
        4) In your own opinion, what are you using for your sites? Have you had any problems setting up that system or operating it once it's running? Are there any things you would stay away from if you had to start over?
        5) I also purchased an Apple RAID system, so if you are familiar with it, is there any way to connect it to multiple Xserves so they all serve the same data?

    I'm a little confused on this, so thanks for all your help and being patient with me. Note: Take it easy on me, I am learning this as I go along, so I may have used terms incorrectly or explained things that don't really make sense. Sorry. P.S. If you need me to supply the specs on the servers to determine which system makes the most sense, I can post them for you.

    Read the article

  • apache2: Could not reliably determine the server's fully qualified domain name

    - by Joseph Silvashy
    I've never encountered this error before. And secondly, I'd like to know how you folks debug your apache configurations.

        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName

    In my Virtual Host configuration I do have these lines:

        ServerName example.com
        ServerAlias www.example.com

    (of course it has my actual info in there) So I guess my question is, why wouldn’t apache be able to determine my fully qualified domain name?

    Read the article

  • Have a service start on startup with Ubuntu

    - by Joseph Silvashy
    I'm not clear on how to start a service when the server boots. I read in some of the other questions asked about adding the script to /etc/init.d, but it's just one line that I need to execute on the command line:

        sudo /etc/init.d/avahi-daemon restart

    I have a few issues with this. Firstly, I apparently need to use sudo, and it gives me the following:

        ngl-server-01:~% sudo /etc/init.d/avahi-daemon start
        Rather than invoking init scripts through /etc/init.d, use the service(8)
        utility, e.g. service avahi-daemon start

        Since the script you are attempting to invoke has been converted to an
        Upstart job, you may also use the start(8) utility, e.g. start avahi-daemon

    But when I try just avahi-daemon start I get:

        Too many arguments

    Why is this? And how would you start this service?

    Read the article

  • Only show hidden files in certain directories

    - by Joseph Silvashy
    On a Unix system (or more specifically on OS X) is it possible to show hidden files in only some directories? For example as a developer I want to see the hidden files in my Rails projects but not on my desktop as well. I guess I'm just tired of seeing all these .DS_Store and .trashes files swimming around, any remedies not directly related are welcome too!

    Read the article

  • Throttle connections to web service if load gets too high?

    - by Joseph Turian
    I have a web site that communicates via XMLRPC with an XMLRPC server web service. (The web service is written in Python using xmlrpclib.) I believe that xmlrpclib will block while it is handling one request. So if there are three users with an xmlrpclib request ahead of you, your response takes four times as long. How do I handle it if I receive too many XMLRPC requests and the web service gets bogged down and has slow response time? If I am getting slashdotted, my preferred behavior is that the first users get good response times and everyone else is told to come back later. I think this is superior to giving everyone terrible response times. How do I create this behavior? Is this called load-balancing? I am not actually balancing though, until I have multiple servers.
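    What the question describes is usually called load shedding or admission control rather than load balancing: instead of letting requests queue up, the server refuses work beyond a fixed concurrency cap so the requests it does accept stay fast. A rough sketch of the idea with the stdlib XML-RPC server (hypothetical; Python 2 module names to match xmlrpclib, and the cap, port, and method are made up):

        import threading
        from SimpleXMLRPCServer import SimpleXMLRPCServer
        from SocketServer import ThreadingMixIn

        MAX_IN_FLIGHT = 4
        slots = threading.BoundedSemaphore(MAX_IN_FLIGHT)

        class ThreadedXMLRPCServer(ThreadingMixIn, SimpleXMLRPCServer):
            pass

        def guarded(func):
            def wrapper(*args):
                if not slots.acquire(False):          # non-blocking: no free slot, refuse
                    raise Exception("server busy, try again later")
                try:
                    return func(*args)
                finally:
                    slots.release()
            return wrapper

        server = ThreadedXMLRPCServer(("0.0.0.0", 8000))
        server.register_function(guarded(lambda x: x * 2), "double")
        server.serve_forever()

    The raised exception reaches callers as an XML-RPC Fault, which the web site can translate into a "come back later" page, while the cap on in-flight requests keeps response times for accepted callers roughly flat.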

    Read the article

  • Getting "-bash: fork: Resource temporarily unavailable" in OSX

    - by Joseph Tura
    I seem to run into problems with the max. number of processes every so often. Anyone know what is best practice for fixing this? Running OSX 10.6 on a MacBook Pro i7. ulimit -a returns these values:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 256
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 266
        virtual memory          (kbytes, -v) unlimited

    When the error occurred I checked, and there were 102 running tasks and 523 threads.

    Read the article

  • Insecure world writable dir

    - by Joseph Silvashy
    I can't figure out how to fix this, apparently ruby doesn't like anything in my home directory. /Users/Connor/.rvm/rubies/ree-1.8.7-2010.01/bin/gem:4: warning: Insecure world writable dir /Users/Connor/.rvm/rubies/ree-1.8.7-2010.01/bin in PATH, mode 040766 How can I fix this?

    Read the article

  • php5-mysqlnd on debian wheezy/sid?

    - by Joseph
    I am trying to install php5-mysqlnd on a fresh install of Wheezy (/etc/debian_version refers to it as wheezy/sid) and I'm having a problem:

        root@debian:/var/www/lottery1# apt-get install php5-mysqlnd
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        php5-mysqlnd is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        Setting up php5-mysqlnd (5.4.0-3) ...
        ucfr: Attempt from package php5-mysqlnd to take /etc/php5/mods-available/mysql.ini away from package php5-mysql
        ucfr: Aborting.
        dpkg: error processing php5-mysqlnd (--configure):
         subprocess installed post-installation script returned error exit status 4
        Processing triggers for libapache2-mod-php5 ...
        configured to not write apport reports
        Reloading web server config: apache2.
        Errors were encountered while processing:
         php5-mysqlnd
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    It seems there is some sort of conflict with the php5-mysql package, but I still get this error even after removing (with --purge) the php5-mysql package. Any thoughts? I'm trying to run a web tool that makes heavy use of mysqli_result::fetch_all(). Thanks!

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >