Search Results

Search found 16168 results on 647 pages for 'shared state'.

  • How to control the state of buttons based on an event in a Silverlight app?

    - by vladc77
    I am trying to avoid building two buttons when I really need one. In my Silverlight app, I have a few grids with different content and buttons that control the visibility of these grids. I need to show a different visual for a button when its content grid is visible. I can control states such as MouseOver and Pressed with the Visual State Manager; however, I am not sure how to achieve this particular behavior with it. I could also place an image on top of the button and switch the visibility of both, but that is not ideal for what I need. I am wondering if there is any way to achieve this behavior. Any ideas are highly appreciated!

  • Initialise a WiX CheckBox's check state based on a property?

    - by MauriceL
    How does one initialise a WiX check box based on the value of a property? So far, I've done the following:

        <Control Id="Checkbox" Type="CheckBox" X="0" Y="0" Width="100" Height="15"
                 Property="CHECKBOX_SELECTION" Text="I want this feature"
                 CheckBoxValue="1" TabSkip="no">
          <Condition Action="hide">HIDE_CHECKBOX</Condition>
          <Condition Action="show">NOT HIDE_CHECKBOX</Condition>
        </Control>

    Currently I have two custom actions to set HIDE_CHECKBOX and CHECKBOX_SELECTION; the CHECKBOX_SELECTION custom action occurs immediately after the HIDE_CHECKBOX action. What I'm seeing is that HIDE_CHECKBOX is behaving correctly (i.e. the checkbox is hidden), which suggests that I've got the ordering of the custom actions right (is this a safe assumption?), but CHECKBOX_SELECTION is not changing the check state of the check box. Also, I've confirmed in the logs that the property is being set to '1'.

  • HTG Explains: Why Does Rebooting a Computer Fix So Many Problems?

    - by Chris Hoffman
    Ask a geek how to fix a problem you're having with your Windows computer and they'll likely ask "Have you tried rebooting it?" This seems like a flippant response, but rebooting a computer can actually solve many problems. So what's going on here? Why does resetting a device or restarting a program fix so many problems? And why don't geeks try to identify and fix problems rather than use the blunt hammer of "reset it"?

    This Isn't Just About Windows

    Bear in mind that this solution isn't limited to Windows computers; it applies to all types of computing devices. You'll find the advice "try resetting it" applied to wireless routers, iPads, Android phones, and more. The same advice even applies to software — is Firefox acting slow and consuming a lot of memory? Try closing it and reopening it!

    Some Problems Require a Restart

    To illustrate why rebooting can fix so many problems, let's look at the ultimate software problem a Windows computer can face: Windows halts, showing a blue screen of death. The blue screen was caused by a low-level error, likely a problem with a hardware driver or a hardware malfunction. Windows reaches a state where it doesn't know how to recover, so it halts, shows a blue screen of death, gathers information about the problem, and automatically restarts the computer for you. This restart fixes the blue screen of death.

    Windows has gotten better at dealing with errors — for example, if your graphics driver crashes, Windows XP would have frozen. In Windows Vista and newer versions, the desktop will lose its fancy graphical effects for a few moments before regaining them; behind the scenes, Windows is restarting the malfunctioning graphics driver. But why doesn't Windows simply fix the problem rather than restarting the driver or the computer itself? Because it can't — the code has encountered a problem and stopped working completely, so there's no way for it to continue. By restarting, the code can start from square one and hopefully won't encounter the same problem again.

    Examples of Restarting Fixing Problems

    While certain problems require a complete restart because the operating system or a hardware driver has stopped working, not every problem does. Some problems may be fixable without a restart, though a restart may be the easiest option.

    Windows is Slow: Let's say Windows is running very slowly. It's possible that a misbehaving program is using 99% CPU and draining the computer's resources. A geek could head to the task manager and look around, hoping to locate the misbehaving process and end it. An average user encountering the same problem could simply reboot the computer to fix it rather than dig through the running processes.

    Firefox or Another Program is Using Too Much Memory: In the past, Firefox was the poster child for memory leaks on average PCs. Over time, Firefox would often consume more and more memory, getting larger and larger and slowing down. Closing Firefox will cause it to relinquish all of its memory; when it starts again, it will start from a clean state without any leaked memory. This applies to any software with memory leaks, not just Firefox.

    Internet or Wi-Fi Network Problems: If you have a problem with your Wi-Fi or Internet connection, the software on your router or modem may have encountered a problem. Resetting the router — just by unplugging it from its power socket and then plugging it back in — is a common solution for connection problems.

    In all cases, a restart wipes away the current state of the software. Any code that's stuck in a misbehaving state will be swept away, too. When you restart, the computer or device will bring the system up from scratch, restarting all the software from square one so it will work as well as it was working before.

    "Soft Resets" vs. "Hard Resets"

    In the mobile device world, there are two types of "resets" you can perform. A "soft reset" is simply restarting a device normally — turning it off and then on again. A "hard reset" is resetting its software state back to its factory default state. When you think about it, both types of resets fix problems for a similar reason. For example, let's say your Windows computer refuses to boot or becomes completely infected with malware. Simply restarting the computer won't fix the problem, because the problem is with the files on the computer's hard drive — it has corrupted files or malware that loads at startup. However, reinstalling Windows (performing a "Refresh or Reset your PC" operation in Windows 8 terms) will wipe away everything on the computer's hard drive, restoring it to its formerly clean state. This is simpler than looking through the computer's hard drive, trying to identify the exact reason for the problems, or trying to ensure you've obliterated every last trace of malware. It's much faster to simply start over from a known-good, clean state than to locate every possible problem and fix it.

    Ultimately, the answer is that "resetting a computer wipes away the current state of the software, including any problems that have developed, and allows it to start over from square one." It's easier and faster to start from a clean state than to identify and fix any problems that may be occurring — in fact, in some cases, it may be impossible to fix problems without beginning from that clean state.

    Image Credit: Arria Belli on Flickr, DeclanTM on Flickr

  • Installing Django on Shared Server: No module named MySQLdb?

    - by Mark
    I'm getting this error:

        Traceback (most recent call last):
          File "/home/<username>/flup/server/fcgi_base.py", line 558, in run
          File "/home/<username>/flup/server/fcgi_base.py", line 1116, in handler
          File "/home/<username>/python/django/django/core/handlers/wsgi.py", line 241, in __call__
            response = self.get_response(request)
          File "/home/<username>/python/django/django/core/handlers/base.py", line 73, in get_response
            response = middleware_method(request)
          File "/home/<username>/python/django/django/contrib/sessions/middleware.py", line 10, in process_request
            engine = import_module(settings.SESSION_ENGINE)
          File "/home/<username>/python/django/django/utils/importlib.py", line 35, in import_module
            __import__(name)
          File "/home/<username>/python/django/django/contrib/sessions/backends/db.py", line 2, in ?
            from django.contrib.sessions.models import Session
          File "/home/<username>/python/django/django/contrib/sessions/models.py", line 4, in ?
            from django.db import models
          File "/home/<username>/python/django/django/db/__init__.py", line 41, in ?
            backend = load_backend(settings.DATABASE_ENGINE)
          File "/home/<username>/python/django/django/db/__init__.py", line 17, in load_backend
            return import_module('.base', 'django.db.backends.%s' % backend_name)
          File "/home/<username>/python/django/django/utils/importlib.py", line 35, in import_module
            __import__(name)
          File "/home/<username>/python/django/django/db/backends/mysql/base.py", line 13, in ?
            raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
        ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb

    when I try to run this script on my shared server:

        #!/usr/bin/python
        import sys, os
        sys.path.insert(0, "/home/<username>/python/django")
        sys.path.insert(0, "/home/<username>/python/django/www")  # projects directory
        os.chdir("/home/<username>/python/django/www/<project>")
        os.environ['DJANGO_SETTINGS_MODULE'] = "<project>.settings"
        from django.core.servers.fastcgi import runfastcgi
        runfastcgi(method="threaded", daemonize="false")

    But my web host just installed MySQLdb for me a few hours ago, and when I run python from the shell I can import MySQLdb just fine. Why would this script report that it can't find it?
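    A common culprit on shared hosts is that the shell and the FastCGI process see different Python paths: MySQLdb may live in a user-local site-packages directory that the shell picks up (through PYTHONPATH in a login script, for instance) but the web server's environment does not. A minimal sketch of that workaround; the install path below is hypothetical, so first find the real one by running python -c "import MySQLdb; print MySQLdb.__file__" in the shell where the import works, then add that directory at the top of the script:

        #!/usr/bin/python
        import sys, os

        # Hypothetical user-local install location of MySQLdb -- replace with the
        # directory reported by the shell where "import MySQLdb" succeeds.
        sys.path.insert(0, "/home/<username>/lib/python2.4/site-packages")

        sys.path.insert(0, "/home/<username>/python/django")
        sys.path.insert(0, "/home/<username>/python/django/www")  # projects directory
        # ... rest of the original FastCGI script unchanged ...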

  • What is the correct way to synchronize a shared, static object in Java?

    - by johnrock
    This question concerns the proper way to synchronize a shared object in Java. One caveat is that the object I want to share must be accessed from static methods. My question is: if I synchronize on a static field, does that lock the class the field belongs to, similar to the way a synchronized static method would? Or will this only lock the field itself? In my specific example I am asking: will calling PayloadService.getPayload() or PayloadService.setPayload() lock PayloadService.payload, or will it lock the entire PayloadService class?

        public class PayloadService extends Service {

            private static PayloadDTO payload = new PayloadDTO();

            public static void setPayload(PayloadDTO payload) {
                synchronized (PayloadService.payload) {
                    PayloadService.payload = payload;
                }
            }

            public static PayloadDTO getPayload() {
                synchronized (PayloadService.payload) {
                    return PayloadService.payload;
                }
            }
            ...

    Is this a correct/acceptable approach? In my example the PayloadService is a separate thread, updating the payload object at regular intervals; other threads need to call PayloadService.getPayload() at random intervals to get the latest data, and I need to make sure that they don't lock the PayloadService from carrying out its timer task.
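    Note that synchronized(PayloadService.payload) locks whatever object the field refers to at that moment, not the field and not the class; because setPayload() reassigns the field inside the block, two threads can end up synchronizing on different objects, so the guard protects nothing. The usual fix is a dedicated lock object that is never reassigned (in Java, a private static final Object lock). A minimal sketch of that dedicated-lock pattern, written in Python for brevity rather than in the question's Java:

        import threading

        _payload_lock = threading.Lock()  # dedicated lock object, never reassigned
        _payload = None                   # stand-in for the PayloadDTO reference

        def set_payload(new_payload):
            global _payload
            with _payload_lock:           # readers and writers share this one lock
                _payload = new_payload

        def get_payload():
            with _payload_lock:
                return _payload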

  • Why does the slim reader/writer exclusive lock outperform the shared one?

    - by Jichao
    I have tested the performance of the slim reader/writer lock under Windows 7, using the code from Windows via C/C++. The result surprised me: the exclusive lock outperforms the shared one. Here is the code:

        unsigned int __stdcall slim_reader_writer_exclusive(void *arg)
        {
            //SRWLOCK srwLock;
            //InitializeSRWLock(&srwLock);
            for (int i = 0; i < 1000000; ++i) {
                AcquireSRWLockExclusive(&srwLock);
                g_value = 0;
                ReleaseSRWLockExclusive(&srwLock);
            }
            _endthreadex(0);
            return 0;
        }

        unsigned int __stdcall slim_reader_writer_shared(void *arg)
        {
            int b;
            for (int i = 0; i < 1000000; ++i) {
                AcquireSRWLockShared(&srwLock);
                //b = g_value;
                g_value = 0;
                ReleaseSRWLockShared(&srwLock);
            }
            _endthreadex(0);
            return 0;
        }

    g_value is a global volatile int variable. Could you kindly explain why this could happen?

  • Shared Git repo syncing to SVN causing git svn rebase to pollute the repo with a lot of no-op merge problems

    - by John K
    This wasn't so bad at the beginning, but now I have hundreds of no-op merge problems (solved by git rebase --skip). I have set up a shared git repo for my group because it is easier to deal with, but the company uses SVN, so I have to keep SVN in sync with git. It worked like a dream at first, but after weeks of doing this, git is giving me a lot of the following errors:

        Applying: * making all config actions work
        Using index info to reconstruct a base tree...
        Falling back to patching base and 3-way merge...
        Auto-merging app/controllers/vulnerabilities_controller.rb
        CONFLICT (content): Merge conflict in app/controllers/vulnerabilities_controller.rb
        Auto-merging public/javascripts/network_analysis_vulnerability_config.js
        CONFLICT (content): Merge conflict in public/javascripts/network_analysis_vulnerability_config.js
        Failed to merge in the changes.
        Patch failed at 0046 * making all config actions work

    My workflow:

        git co master
        git pull origin
        git svn rebase
        ... deal with no-op merge problems ...
        git svn dcommit
        git pull origin
        git push origin

    The problem is that what is in SVN is correct, so I use git rebase --skip, but I have to do that hundreds of times before I can dcommit. How do I clear these merge problems permanently?

  • Why is a function's length information from another shared lib stored in the ELF?

    - by minastaros
    Our project (C++, Linux, gcc, PowerPC) consists of several shared libraries. When releasing a new version of the package, only those libs should change whose source code was actually affected. By "change" I mean absolute binary identity (the checksum over the file is compared; a different checksum means a different version, according to the policy). (I should mention that the whole project is always built at once, whether or not any code has changed in a given library.) Usually this can be achieved by hiding the private parts of the included header files and not changing the public ones. However, there was a case where just a delete was added to the destructor of a class TableManager (in the TableManager.cpp file!) of library libTableManager.so, and yet the binary/checksum of library libB.so (which uses class TableManager) changed.

    TableManager.h:

        class TableManager
        {
        public:
            TableManager();
            ~TableManager();
        private:
            int* myPtr;
        };

    TableManager.cpp:

        TableManager::~TableManager()
        {
            doSomeCleanup();
            delete myPtr;   // this delete has been added
        }

    By inspecting libB.so with readelf --all libB.so and looking at the .dynsym section, it turned out that the length of every function, even the dynamically used ones from other libraries, is stored in libB! It looks like this (the length is the 668 in the 3rd column):

        527: 00000000   668 FUNC    GLOBAL DEFAULT  UND _ZN12TableManagerD1Ev

    So my questions are: Why is the length of a function actually stored in the client lib? Wouldn't a start address be sufficient? Can this be suppressed somehow when compiling/linking libB.so (a kind of "stripping")? We would really like to reduce this degree of dependency...

  • Debian Lenny Xen bridge networking problem

    - by Sasha
    DomU isn't talking to the world, but it talks to Dom0. Here are the tests that I made:

    Dom0 (external networking is working):

        ping 188.40.96.238   # which is DomU's IP
        PING 188.40.96.238 (188.40.96.238) 56(84) bytes of data.
        64 bytes from 188.40.96.238: icmp_seq=1 ttl=64 time=0.092 ms

    DomU:

        ping 188.40.96.215   # which is Dom0's IP
        PING 188.40.96.215 (188.40.96.215) 56(84) bytes of data.
        64 bytes from 188.40.96.215: icmp_seq=1 ttl=64 time=0.045 ms

        ping 188.40.96.193   # which is the gateway - fail
        PING 188.40.96.193 (188.40.96.193) 56(84) bytes of data.
        ^C
        --- 188.40.96.193 ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1013ms

    The system is Debian Lenny with a normal setup. Here are my configs:

        uname -a
        Linux green0 2.6.26-2-xen-686 #1 SMP Wed Aug 19 08:47:57 UTC 2009 i686 GNU/Linux

        cat /etc/xen/green1.cfg | grep -v '#'
        kernel      = '/boot/vmlinuz-2.6.26-2-xen-686'
        ramdisk     = '/boot/initrd.img-2.6.26-2-xen-686'
        memory      = '2000'
        root        = '/dev/xvda2 ro'
        disk        = [ 'file:/home/xen/domains/green1/swap.img,xvda1,w', 'file:/home/xen/domains/green1/disk.img,xvda2,w' ]
        name        = 'green1'
        vif         = [ 'ip=188.40.96.238,mac=00:16:3E:1F:C4:CC' ]
        on_poweroff = 'destroy'
        on_reboot   = 'restart'
        on_crash    = 'restart'

        ifconfig
        eth0    Link encap:Ethernet  HWaddr 00:24:21:ef:2f:86
                inet addr:188.40.96.215  Bcast:188.40.96.255  Mask:255.255.255.192
                inet6 addr: fe80::224:21ff:feef:2f86/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:3296 errors:0 dropped:0 overruns:0 frame:0
                TX packets:2204 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:262717 (256.5 KiB)  TX bytes:330465 (322.7 KiB)

        lo      Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        peth0   Link encap:Ethernet  HWaddr 00:24:21:ef:2f:86
                inet6 addr: fe80::224:21ff:feef:2f86/64 Scope:Link
                UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
                RX packets:3407 errors:0 dropped:657431448 overruns:0 frame:0
                TX packets:2291 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:319941 (312.4 KiB)  TX bytes:338423 (330.4 KiB)
                Interrupt:16 Base address:0x8000

        vif2.0  Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
                inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
                UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
                RX packets:27 errors:0 dropped:0 overruns:0 frame:0
                TX packets:151 errors:0 dropped:33 overruns:0 carrier:0
                collisions:0 txqueuelen:32
                RX bytes:1164 (1.1 KiB)  TX bytes:20974 (20.4 KiB)

        ip a s
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: peth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
            link/ether 00:24:21:ef:2f:86 brd ff:ff:ff:ff:ff:ff
            inet6 fe80::224:21ff:feef:2f86/64 scope link
               valid_lft forever preferred_lft forever
        4: vif0.0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
        5: veth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
        6: vif0.1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
        7: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
        8: vif0.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
        9: veth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
        10: vif0.3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
        11: veth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
        12: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
            link/ether 00:24:21:ef:2f:86 brd ff:ff:ff:ff:ff:ff
            inet 188.40.96.215/26 brd 188.40.96.255 scope global eth0
            inet6 fe80::224:21ff:feef:2f86/64 scope link
               valid_lft forever preferred_lft forever
        14: vif2.0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 32
            link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
            inet6 fe80::fcff:ffff:feff:ffff/64 scope link
               valid_lft forever preferred_lft forever

        brctl show
        bridge name     bridge id               STP enabled     interfaces
        eth0            8000.002421ef2f86       no              peth0
                                                                vif2.0

        ip r l (Dom0):
        188.40.96.192/26 dev eth0  proto kernel  scope link  src 188.40.96.215
        default via 188.40.96.193 dev eth0

        ip r l (DomU):
        188.40.96.192/26 dev eth0  proto kernel  scope link  src 188.40.96.238
        default via 188.40.96.193 dev eth0

  • Port forwarding to IIS in Linux

    - by Simon
    Hi, I am trying to set up port forwarding on a Linux box to an IIS webserver on my internal network. The web server runs on Windows 2003 Server. My Linux box has:

        eth0 - Internet connection
        eth1 - internal subnet (10.10.10.x)
        eth2 - 2nd internal subnet (192.168.0.x), dhcp interface

    My webserver is on the eth2 interface (192.168.0.6). I am trying to port forward port 80, to no avail; I use the same set of rules to port forward to a different webserver and it works there. The web application is available on the internal network but not for external users.

        iptables -t nat -A PREROUTING -p tcp -i eth0 -d $PUBLIC_IP --dport 80 -j DNAT --to 192.168.0.6:80
        iptables -A FORWARD -p tcp -i eth0 -o eth2 -d 192.168.0.6 --dport 80 -m state --state NEW -j ACCEPT
        iptables -A FORWARD -t filter -o eth0 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -t filter -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    Any ideas?

  • How to go to a website on a shared server by its IP address?

    - by user1502776
    I have a few questions, please help. First, I can access Google search just by typing http://74.125.224.211, because this is the IP address returned by nslookup. However, I could not do so with the IP addresses returned for www.yahoo.com. How do I go to the Yahoo search page by its IP? Another example: http://www.allaboutcircuits.com resolves to 68.233.243.63 via the DNS server, but if I go to http://68.233.243.63 I get "Hello world!", lol! Second, for some reason there is something wrong with the DNS resolvers of my web hosting service (and it will not be fixed!!). So a call like:

        file_get_contents("http://www.allaboutcircuits.com");

    returns:

        php_network_getaddresses: getaddrinfo failed: Name or service not known

    How do I get around this with the IP address 68.233.243.63? I mean, how do I somehow attach the HTTP hostname to the file_get_contents() request? I would like to solve this on my own side (in my code); no troubleshooting/adjustment will be done by the server admin.
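    The "Hello world!" page is the telltale sign of name-based virtual hosting: many sites share one IP, and the server picks which site to serve from the HTTP Host header, so a bare request to the IP gets the default site. The workaround is to connect to the IP but send the real hostname in the Host header; in PHP, the same idea can be expressed by passing a stream context with a custom Host header to file_get_contents. A minimal sketch of the idea in Python, using the IP and hostname from the question; plain HTTP only, since HTTPS certificate checks would fail against a bare IP:

        import urllib.request

        ip = "68.233.243.63"                # address the TCP connection goes to
        host = "www.allaboutcircuits.com"   # virtual host we actually want

        # Connect by IP, but name the desired virtual host in the Host header.
        req = urllib.request.Request("http://" + ip + "/", headers={"Host": host})
        with urllib.request.urlopen(req) as resp:
            print(resp.status)
            print(resp.read(200))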

  • GRE tunnel Cisco/Linux traffic forwarding

    - by mezgani
    I set up a GRE tunnel between a Cisco router and a Linux machine; the tunnel interface on the Linux box is named pic. I have to forward traffic coming from the Cisco through the Linux box. The rules I've set on the Linux box are as follows:

        echo "1" > /proc/sys/net/ipv4/ip_forward
        iptables -A INPUT -p 47 -j ACCEPT
        iptables -A FORWARD -i ppp0 -j ACCEPT
        iptables -A FORWARD -i pic -o ppp0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i ppp0 -o pic -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE

    I see the traffic coming from the tunnel and being forwarded to the internet, but no replies to the sent packets. Am I missing something like a routing rule?

  • Creating a Shared Wi-Fi Connection on Windows 7 WITHOUT WEP/WPA Security?

    - by Duncan
    OK, for some reason I just can't figure this out on Windows 7. I never had an issue with it on Windows XP, and never tried in Vista. I realize that I need to create an ad-hoc network, which I can do. However, I can't get it to share the wireless signal on my laptop, even if I go into the adapter settings. I need (well, want) to connect my old Psion netBook to my home network, and due to its wireless card, I can't have any form of security. Thanks in advance!

  • git, egit, submodules, and symlinks -- how should shared sub-projects be handled in eclipse?

    - by Autophil
    Question: what's the best way to handle sub-projects in Eclipse when using git for SCM? Here's the situation. I have a few git projects with a directory structure laid out more or less like this:

        simpleproj
            app
            www
                admin
                demo
            lib
                model
                orm
                view
            model
                user
                blah
                ...

        storeproj
            app
            www
                about
                mobile
                fbapp
            lib
                model
                orm
                view
            model
                user
                message
                cart
                product
                merchant

    Each directory in "lib" contains a separate project, either created in-house or forked, all of which use git for source control. So I figured I should make them submodules of my projects, right? Well, we've been moving toward Eclipse + EGit, because some of our Windows guys not used to a CLI need something they can use without being scared of screwing things up. Anyway, the problem is, EGit doesn't support submodules. So, my solution has been a rather crude one involving symlinks... let's say my directory structure on my dev box is generally laid out like this:

        ~/projects/
            bigproj
                .git
                app
                lib
                    model (-> ~/lib/model/src/)
                    orm   (-> ~/lib/orm/src/)
            neatproj
                .git
                app
                lib
                    view (-> ~/lib/view/src/)
            oldproj
                .git
                app
                lib
                    orm (-> ~/lib/orm/src/)

        ~/lib/
            model
                .git
                src
                README.md
            orm
                .git
                src
                COPYING
            view
                .git
                src

    The symlinks point to a subdirectory of the directory containing the git repo, so Eclipse doesn't get confused, and everything sort of works. On my machine, I can update the libs from anywhere and all projects will be updated (needing to be committed again, of course). Each project stores a separate copy of the contents of the symlinked directories within "lib", but only when staged from within Eclipse. After committing from Eclipse and moving back to the CLI, git sees that a bunch of files have been removed and a few symlinks have been created. Of course this is acceptable too, probably more so than keeping a separate history of the libs for each project... but Eclipse and CLI git obviously need to be on the same page so tons of files aren't vanishing and reappearing. So this brings me to my question. I'd like to know how to either: get Eclipse + EGit to see the symlinks as symlinks, if git will somehow handle them properly*, or get the CLI git to treat them as non-symlinks. Or, if there's a better way to do this, I'm all ears. Hope this all made sense! :D Note: tried to tag this as git-submodules, but was not allowed :(

    * Should I make them relative or absolute? Either way it's a mess. Also, will symlinks work on Windows? I know there's something similar, but you need a 3rd-party tool to manage them AFAIK; I doubt these would translate well.

  • What's the state of the art in image upscaling?

    - by monov
    I like to collect cool pics and use them as wallpapers or for other things. Often, artists publish only low-res versions, probably for fear of theft. Example: Gabriel Pulecio's BIRDS. Now, if I want to use that as a wallpaper, I'd have to upscale it, and obviously that would make it look blurry because of the bicubic interpolation. I realize there's no real way to get a high-res version from a low-res pic, because the information simply isn't there. That said, I'm wondering if heuristics have been developed for upscaling with less apparent loss of quality. Those would probably be optimized for specific image types: for photorealistic pictures, for cartoons with large flat areas, for pixel art... One algorithm I'm aware of is seam carving. It works for some kinds of pics, especially ones with a plain, undetailed or uninteresting background and a subject that strongly stands out, but it's far from general-purpose. Applying it to the above pic produces a result that looks quite sharp, but the proportions are horribly distorted because the algorithm is not designed for this kind of pic. Another family is pixel-art scaling algorithms. Those are completely unfit for anything other than actual pixel art that's pixelized to begin with. For example, I tried the scale2x Windows binary on my pic, but its output was nearly indistinguishable from nearest-neighbour scaling, because the algorithm didn't detect any isolated pixely fragments to work from. Something else I tried: I enlarged the image in Photoshop with bicubic interpolation, then applied unsharp mask. The result looks pretty bad; the red blotch is resized reasonably well, but the dove is far from it. What I'm looking for is some app that makes a best-effort attempt at upscaling any input image while minimizing blurriness. If you know of any, I'll be thankful. Note that the subjective prettiness and sharpness of the result is what matters... the result doesn't need to be completely faithful to the original small image.
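    For reference, the Photoshop experiment described above (bicubic upscale followed by an unsharp mask) is easy to reproduce and tweak programmatically. A minimal sketch using the Pillow imaging library; the filenames and filter parameters are hypothetical and worth experimenting with per image:

        from PIL import Image, ImageFilter

        img = Image.open("birds_small.jpg")   # hypothetical input file
        w, h = img.size

        # 2x bicubic upscale, then an unsharp mask to counteract the blur.
        big = img.resize((w * 2, h * 2), Image.BICUBIC)
        sharp = big.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
        sharp.save("birds_2x_sharpened.png")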

  • How to drop all subnets outside of the US using iptables

    - by Jim
    I want to block all subnets outside the US. I've made a script that has all of the US subnets in it. I want to disallow or DROP everything except my list. Can someone give me an example of how I can start by denying everything? This is the output from iptables -L:

        Chain INPUT (policy DROP)
        target     prot opt source     destination
        ACCEPT     all  --  anywhere   anywhere
        ACCEPT     all  --  anywhere   anywhere    state RELATED,ESTABLISHED
        ACCEPT     tcp  --  anywhere   anywhere    tcp dpt:ftp state NEW
        DROP       icmp --  anywhere   anywhere

        Chain FORWARD (policy DROP)
        target     prot opt source     destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source     destination

    And these are the rules:

        iptables -F
        iptables --policy INPUT DROP
        iptables --policy FORWARD DROP
        iptables --policy OUTPUT ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp -i eth0 --dport 21 -m state --state NEW -j ACCEPT
        iptables -A INPUT -p icmp -j DROP

    Just for clarity: with these rules, I can still connect to port 21 without my subnet list. I want to block ALL subnets and open only those inside the US.
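    Port 21 is reachable from anywhere because the FTP ACCEPT rule above carries no source (-s) restriction; with the INPUT policy already DROP, limiting it to the US list is a matter of emitting one copy of that rule per subnet. A small generator sketch, assuming the list lives one CIDR per line in a file called us_subnets.txt (a hypothetical name):

        #!/usr/bin/env python3
        # Emit iptables commands that allow new FTP connections only from the
        # listed subnets, relying on the INPUT chain's DROP policy otherwise.

        with open("us_subnets.txt") as f:
            cidrs = [ln.strip() for ln in f if ln.strip() and not ln.startswith("#")]

        print("iptables -F")
        print("iptables --policy INPUT DROP")
        print("iptables -A INPUT -i lo -j ACCEPT")
        print("iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT")
        for cidr in cidrs:
            print("iptables -A INPUT -p tcp -i eth0 -s %s --dport 21 "
                  "-m state --state NEW -j ACCEPT" % cidr)

    For lists with thousands of entries, loading them into an ipset set and matching them with a single iptables rule is the usual scalable alternative.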

  • How do I set up shared internet on a network using a computer hooked up to a router?

    - by Skadlig
    I have a wireless broadband modem (Huawei E1750) hooked up to my computer (call it A, running Windows 7) whose internet connection I wish to share with my other computer (call it B, also running Windows 7). A is hooked up to my D-Link DIR-600 router using a wired connection to port 1 on the router; B is connected to the router wirelessly. I have tried setting up the sharing according to the help files for ICS, but I have not been able to get it working. I suspect that there is something in my hardware configuration that is making it difficult. I would appreciate some tips and pointers as to what could be the reason for my problems.

  • How to make Shared Keys .ssh/authorized_keys and sudo work together?

    - by farinspace
    I've setup the .ssh/authorized_keys and am able to login with the new "user" using the pub/private key ... I have also added "user" to the sudoers list ... the problem I have now is when I try to execute a sudo command, something simple like: $ sudo cd /root it will prompt me for my password, which I enter, but it doesn't work (I am using the private key password I set) Also, ive disabled the users password using $ passwd -l user What am I missing? Somewhere my initial remarks are being misunderstood ... I am trying to harden my system ... the ultimate goal is to use pub/private keys to do logins versus simple password authentication. I've figured out how to set all that up via the authorized_keys file. Additionally I will ultimately prevent server logins through the root account. But before I do that I need sudo to work for a second user (the user which I will be login into the system with all the time). For this second user I want to prevent regular password logins and force only pub/private key logins, if I don't lock the user via" passwd -l user ... then if i dont use a key, i can still get into the server with a regular password. But more importantly I need to get sudo to work with a pub/private key setup with a user whos had his/her password disabled. Edit: Ok I think I've got it (the solution): 1) I've adjusted /etc/ssh/sshd_config and set PasswordAuthentication no This will prevent ssh password logins (be sure to have a working public/private key setup prior to doing this 2) I've adjusted the sudoers list visudo and added root ALL=(ALL) ALL dimas ALL=(ALL) NOPASSWD: ALL 3) root is the only user account that will have a password, I am testing with two user accounts "dimas" and "sherry" which do not have a password set (passwords are blank, passwd -d user) The above essentially prevents everyone from logging into the system with passwords (a public/private key must be setup). Additionally users in the sudoers list have admin abilities. They can also su to different accounts. So basically "dimas" can sudo su sherry, however "dimas can NOT do su sherry. Similarly any user NOT in the sudoers list can NOT do su user or sudo su user. NOTE The above works but is considered poor security. Any script that is able to access code as the "dimas" or "sherry" users will be able to execute sudo to gain root access. A bug in ssh that allows remote users to log in despite the settings, a remote code execution in something like firefox, or any other flaw that allows unwanted code to run as the user will now be able to run as root. Sudo should always require a password or you may as well log in as root instead of some other user.

  • How can I access shared XP files over wifi with an Archos 5 Internet Tablet with Android?

    - by Fred
    I have an Archos 5 Internet tablet with Android. I have been using it to stream video over my XP wifi network. Yesterday the Archos stopped seeing any of my computers. I have found a program that lets me see the network, so I know the hardware works; however, it does not allow me to pick the correct app to play the files I want. I suspect the problem lies somewhere on the XP side. This would probably work better if the Archos 5 allowed me to manually input the IP address. Help?

  • Restoring the exact state of a linux install to a different laptop with different sized drives and other hardware

    - by user259774
    I have an IBM laptop running a Manjaro install that has already been used and settled into, with packages installed, browser profiles, etc. The drive is 60 GB, and it has a swap partition and an ext4 root partition. I need to move this install to a Toshiba computer with a 320 GB drive. How should I go about this? My inclination would be to boot the IBM into a live Linux system and dd the whole 60 GB drive to a file, then boot the Toshiba into a live system and dd the file to its 320 GB drive. Would this work? I know that it wouldn't with Windows, but I believe this is an artificially imposed limitation from Microsoft. Is this correct, or is Linux similarly limited? If not, how could I go about this? Would Clonezilla work, or would the hardware disparities prevent it from working?

  • Solaris + dladm: what is the "unknown" state and how do I bring interfaces up?

    - by yael
    I installed Solaris 10 on my Netra machine. From dladm show-dev I can see which interfaces are down or up. All interfaces are connected to the Cisco switch, and the LEDs are lit on all LAN cards, but I don't understand why all interfaces except e1000g0 are in the unknown state. Please advise how to bring the unknown interfaces up.

        # dladm show-dev
        e1000g0   link: up        speed: 1000 Mbps   duplex: full
        e1000g1   link: unknown   speed: 0 Mbps      duplex: unknown
        e1000g2   link: unknown   speed: 0 Mbps      duplex: unknown
        e1000g3   link: unknown   speed: 0 Mbps      duplex: unknown
        nxge0     link: unknown   speed: 0 Mbps      duplex: unknown
        nxge1     link: unknown   speed: 0 Mbps      duplex: unknown
        nxge2     link: unknown   speed: 0 Mbps      duplex: unknown
        nxge3     link: unknown   speed: 0 Mbps      duplex: unknown

  • Where is the best location to keep shared-developer website files in the linux hierarchy?

    - by Tchalvak
    I just started hosting files for a website on my server, and I'm not sure where an appropriate place to keep them is. At the moment, I have them in /var/www/name.of.virtualhost.site/www/. That's obviously not secure, because anything outside the final public /www/ folder is also reachable, since the /var/www/ contents are already being served up. For example, /var/www/name.of.virtualhost.site/docs/site_policies.txt is accessible via something like defaultsite.com/name.of.virtualhost.site/docs/site_policies.txt. So where is a good place to store the files that make up a website? (When it's a site that only I'm developing, I can obviously just stick them in /home/my_username/sites/name.of.virtualhost.site/, but that doesn't work well when I want other developers to be working on the site's files as well.) I'm running a LAMP stack, not that I expect it to matter.

  • Very large number of connections in TIME_WAIT state; server is slow; ip_conntrack

    - by Sparsh Gupta
    I have an nginx server doing load balancing and reverse proxying. Right now it's behind another nginx, but very soon I plan to put it in front, where it will receive TCP connections from clients directly at a rate of 500 req/second. I am having some big troubles with the server. I have pasted my configuration here, and I'm fairly sure the problem is with ip_conntrack and similar things, which are alien to me: http://paste.org/pastebin/view/28543

        root@load_balancer:/proc/sys/net/ipv4# netstat -an | awk '/tcp/ {print $6}' | sort | uniq -c
             67 CLOSING
            727 ESTABLISHED
            173 FIN_WAIT1
            183 FIN_WAIT2
             19 LAST_ACK
              5 LISTEN
            447 SYN_RECV
              1 SYN_SENT
          27970 TIME_WAIT

    It's an Ubuntu machine with mainly nginx (load balancer and reverse proxy) installed. It surely isn't great. Can you help me understand what's going on and how I can fix it? This is my live server and I am sure it's in bad shape right now. Any document or commands to fix this, or settings I should change to reduce the TIME_WAIT and FIN_WAIT1/2 counts, would be awesome.

  • Can I tell if crashplan has backed up a particular file in a particular state?

    - by Chris Cogdon
    I would like to be able to tell, programmatically, if CrashPlan has backed up a particular file, including the current updates to that file; i.e., that the current contents of the file are backed up. It's relatively easy to tell when CrashPlan last backed up a file: its file name appears in /usr/local/crashplan/log/backup_files.log.0, and with some accuracy I could compare the backup time with the last modification time of the file, but that method appears to be somewhat dubious.

    A couple of methods I could think of, but don't know how to carry out:

    1. Compare the current file to CrashPlan's metadata about that file. This needs knowledge of the format of CrashPlan's "cache" files as well as the hashing system used. This might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something that's scriptable.

    2. Restore the file to a temporary directory and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way.

    I'll describe what I'm trying to achieve. It would be nice to know how to do the above, even if there are alternative methods for the following: I'm using CrashPlan for continuous backups of my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) that a group of WALs are backed up, remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed up too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet.

    I'd like to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes. (As a bonus, if the method is trustworthy, I could turn the archive_command from "copy to archive directory" into "confirm CrashPlan has backed up the current version" and do away with the archive directory completely.) (And, yes, I'm doing regular pg_dumpall's in addition to the above.)
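    For the "dubious" timestamp heuristic mentioned above, a small script at least makes it explicit and repeatable. A minimal sketch, assuming only that a backed-up file's path appears verbatim in backup_files.log.0 and treating the log file's own mtime as the time of the last backup activity; this is a weak proxy, not proof that the current contents are backed up:

        import os

        LOG = "/usr/local/crashplan/log/backup_files.log.0"

        def maybe_backed_up(path):
            """Weak heuristic: the path is mentioned in CrashPlan's backup log
            and the file hasn't been modified since the log was last written."""
            try:
                with open(LOG, errors="replace") as f:
                    mentioned = any(path in line for line in f)
            except OSError:
                return False
            if not mentioned:
                return False
            return os.path.getmtime(path) <= os.path.getmtime(LOG)

        # Example: check a WAL segment before removing it from the archive
        # directory (hypothetical path).
        print(maybe_backed_up("/var/lib/postgresql/archive/000000010000000000000042"))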
