Search Results

Search found 328 results on 14 pages for 'dst'.

Page 12/14 | < Previous Page | 8 9 10 11 12 13 14  | Next Page >

  • Trouble setting up incoming VPN in Microsoft SBS 2008 through a Cisco ASA 5505 appliance

    - by Nils
    I have replaced an aging firewall (a custom setup using Linux) with a Cisco ASA 5505 appliance for our network. It's a very simple setup with around 10 workstations and a single Small Business Server 2008. Setting up incoming ports for SMTP, HTTPS, Remote Desktop etc. to the SBS went fine; they are working like they should. However, I have not succeeded in allowing incoming VPN connections. The clients trying to connect (running Windows 7) are stuck at the "Verifying username and password..." dialog before getting an error message 30 seconds later. We have a single external, static IP, so I cannot set up the VPN connection on another IP address. I have forwarded TCP port 1723 the same way as I did for SMTP and the others, by adding a static NAT rule translating traffic from the SBS server on port 1723 to the outside interface. In addition, I set up an access rule allowing all GRE packets (src any, dst any). I figure that I must somehow forward incoming GRE packets to the SBS server, but this is where I am stuck. I am using ASDM to configure the 5505 (not the console). Any help is very much appreciated!
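
    A hedged sketch of the ASA-side piece that is usually missing in this situation, assuming the TCP/1723 static NAT and access rule already mirror the working SMTP/HTTPS ones: enabling PPTP inspection lets the ASA track and forward the GRE stream that belongs to the TCP/1723 session, which normally replaces any manual GRE rule. Exact behaviour depends on the ASA software version.

        ! sketch: add PPTP inspection to the default global policy
        policy-map global_policy
         class inspection_default
          inspect pptp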

    Read the article

  • Can't get Squid proxy to work

    - by danielgratz
    i need squid proxy on my centos server. But i just can't get it to work. I did yum install squid. Here is my squid.conf file (i removed all comments): acl all src 0.0.0.0/0.0.0.0 acl manager proto cache_object acl localhost src 127.0.0.1/255.255.255.255 acl to_localhost dst 127.0.0.0/8 acl SSL_ports port 443 acl Safe_ports port 80 acl Safe_ports port 21 acl Safe_ports port 443 acl Safe_ports port 70 acl Safe_ports port 210 acl Safe_ports port 1025-65535 acl Safe_ports port 280 acl Safe_ports port 488 acl Safe_ports port 591 acl Safe_ports port 777 acl CONNECT method CONNECT acl our_networks src 192.168.1.0/24 192.168.2.0/24 http_access allow our_networks http_access allow manager localhost http_access deny manager http_access deny !Safe_ports http_access deny CONNECT !SSL_ports http_access allow localhost http_access deny all icp_access allow all http_port 3128 hierarchy_stoplist cgi-bin ? access_log /var/log/squid/access.log squid acl QUERY urlpath_regex cgi-bin \? cache deny QUERY refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern . 0 20% 4320 acl apache rep_header Server ^Apache broken_vary_encoding allow apache coredump_dir /var/spool/squid Then i just put my server's public ip and port 3128 into my web browsers proxy settings... but it isn't working i can't visit any website. Please help. Thanks.
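
    Two common culprits worth checking here, sketched below as a hedged guess: the http_access rules only allow clients whose source address matches our_networks (192.168.1.0/24 or 192.168.2.0/24), so a browser connecting over the public internet is denied, and a stock CentOS firewall usually blocks port 3128. The client address 203.0.113.5 is only an example.

        # squid.conf: allow the address you actually browse from (example IP shown)
        acl myclient src 203.0.113.5/32
        http_access allow myclient

        # on the server: open the proxy port, restart squid, and watch the log
        iptables -I INPUT -p tcp --dport 3128 -j ACCEPT
        service squid restart
        tail -f /var/log/squid/access.log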

    Read the article

  • iptables, forward traffic for ip not active on the host itself

    - by gucki
    I have a KVM guest whose network card is connected to the host using a tap device. The tap device is part of a bridge on the host together with eth0, so the guest can access the public network. So far everything works: the guest can access the public network and can be accessed from it. Now, the KVM process on the host provides a VNC server for the guest which listens on 127.0.0.1:5901 on the host. Is there any way to make this VNC server reachable at the IP address the guest is using (e.g. 192.168.0.249), without preventing the guest from using that same IP (port 5901 is not used by the guest)? It should also work when the guest is not using any IP address at all. So basically I just want to pretend that IP is on the host and answer/forward only traffic to port 5901 to the host itself. I tried using this NAT rule on the host, but it doesn't work. IP forwarding is enabled on the host. iptables -t nat -A PREROUTING -p tcp --dst 192.168.0.249 --dport 5901 -j DNAT --to-destination 127.0.0.1:5901 I assume this is because the IP 192.168.0.249 is not bound to any interface, so no ARP requests for it get answered and no packets for this IP ever reach the host. How can I make it work? :)
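
    A hedged sketch of one way this kind of setup is often made to work: have the host answer ARP for the extra address (proxy ARP on the bridge), and DNAT to an address the kernel will actually route to rather than 127.0.0.1 (either the host's own bridge address, shown here as the placeholder 192.168.0.1, or 127.0.0.1 with route_localnet enabled on kernels that support it). Note the caveat: once the host answers ARP for the address it will attract all traffic for it, so this only behaves as described while the guest is not using that IP at the same time.

        # let the host answer ARP for 192.168.0.249 on the bridge (br0 assumed)
        echo 1 > /proc/sys/net/ipv4/conf/br0/proxy_arp
        ip neigh add proxy 192.168.0.249 dev br0

        # DNAT to the host's own bridge address instead of loopback
        iptables -t nat -A PREROUTING -p tcp -d 192.168.0.249 --dport 5901 \
                 -j DNAT --to-destination 192.168.0.1:5901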

    Read the article

  • Passive mode FTP file download hangs from specific machine

    - by chiptuned
    I have a server which is an AWS instance that just cannot download files from a specific FTP server. I can connect to the FTP server fine and run some commands, but when I request a file it just hangs. Here is the debug output of the base linux ftp client after login: ---> SYST 215 UNIX Type: Apache FtpServer Remote system type is UNIX. ftp> get outgoing/catalog.gz catalog.gz local: catalog.gz remote: outgoing/catalog.gz ---> PASV 227 Entering Passive Mode (64,156,167,125,135,191) ---> RETR outgoing/catalog.gz 150 File status okay; about to open data connection. Thats it. Then it just sits there and nothing transfers. I have verified that a data connection is made but the client gets no data. ? ss -nt dst 64.156.167.125 State Recv-Q Send-Q Local Address:Port Peer Address:Port ESTAB 0 0 10.185.147.150:41190 64.156.167.125:21 ESTAB 0 0 10.185.147.150:48871 64.156.167.125:48557 The FTP server is not in my control and downloads from other FTP servers in passive mode have worked. Active mode does not work as the system is behind a firewall. Every FTP client I've tried has the same problem. The download works from other systems, even from other AWS instances I have with the same Security Group. Not necessarily the same distro or config though. I understand it may be some issue on the server side, but I want to know what it is about my particular machine where the transfer hangs and where on every other machine I can get my hands on, it works. Please let me know what the culprit on the client side could be or ideas on what else to look at.
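
    When the data connection establishes but no payload ever arrives, a path-MTU blackhole or MSS problem between this particular instance and that server is a common suspect; a hedged diagnostic sketch (addresses taken from the question above):

        # watch the data connection itself: do packets arrive and get dropped, or never arrive?
        sudo tcpdump -ni eth0 host 64.156.167.125 and not port 21

        # probe for a path-MTU blackhole (1472 = 1500 minus 28 bytes of IP/ICMP headers)
        ping -M do -s 1472 64.156.167.125

        # if large packets are being lost, clamping the MSS on the instance is a quick test
        sudo iptables -t mangle -A OUTPUT -p tcp --tcp-flags SYN,RST SYN \
             -j TCPMSS --clamp-mss-to-pmtu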

    Read the article

  • MAC-Address based routing

    - by d-fens
    Here is what I want to do: I have a bunch of systems, some of which might share the same public IP, and I disable ARP on them. I have a firewall (either an IP-layer or bridging firewall) between these systems and the internet. Depending on the destination port of incoming IP packets to one of these public IPs, I want to set the destination Ethernet address. So for instance: System A has IP 8.8.8.8, MAC de:ad:be:ef:de:ad, ARP disabled. System B has IP 8.8.8.8, MAC 1f:1f:1f:1f:1f:1f, ARP disabled. The firewall has IP 8.8.8.1, ARP disabled on that interface. 1.) Incoming packet to IP 8.8.8.8, TCP destination port 100. 2.) Incoming packet to IP 8.8.8.8, TCP destination port 101. The firewall sets the destination MAC for 1.) to de:ad:be:ef:de:ad and for 2.) to 1f:1f:1f:1f:1f:1f. Second scenario: System A and System B establish outgoing TCP connections, and the firewall maps the destination MAC of the incoming IP packets (response packets) back to the sender's MAC address. Is this possible in any way with Linux and iptables? Edit: I read that ebtables might "work" in a hackish way for this purpose, but I am not sure...
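
    iptables itself cannot rewrite destination MAC addresses, but ebtables can, and its IP match can key on the destination port; a hedged sketch of the first scenario (bridging firewall assumed, exact option spellings may vary between ebtables versions):

        # incoming packets for 8.8.8.8, TCP port 100 -> System A's MAC
        ebtables -t nat -A PREROUTING -p IPv4 --ip-dst 8.8.8.8 --ip-proto tcp --ip-dport 100 \
                 -j dnat --to-destination de:ad:be:ef:de:ad --dnat-target ACCEPT

        # incoming packets for 8.8.8.8, TCP port 101 -> System B's MAC
        ebtables -t nat -A PREROUTING -p IPv4 --ip-dst 8.8.8.8 --ip-proto tcp --ip-dport 101 \
                 -j dnat --to-destination 1f:1f:1f:1f:1f:1f --dnat-target ACCEPT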

    Read the article

  • Can't connect to FTP server from a specific location

    - by wv_pip
    Last week while uploading website files to our server via FTP, the transfer failed. Ever since then, I haven't been able to connect to the server from work. I can connect just fine from home, or by using an FTP app on my cell phone as long as I'm on the cell network. I can't access the server from any machine on my work network. It's not a credential issue, either. The error message that I always get says that a connection cannot be established, and I am never prompted for my credentials. I have changed absolutely nothing on our domain controller or our firewall/router. I've contacted our ISP (who hosts the website/FTP server) and they can't find anything wrong on their end. They insist that it must be something here at the office that is blocking access. I've also tested access to other FTP servers (ea.com, nvidia.com, etc.) so I know that port 21 is not being blocked. I'm totally stumped. Any help is much appreciated. EDIT: wireshark info here: http://www.cloudshark.org/captures/85a118ae9296?filter=ip.dst%3D%3D66.118.64.208
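
    Since nothing changed on the domain controller or router, a hedged next step is to see how far the TCP handshake to that one server gets from inside the office compared with a working location; the server address below is the one from the capture.

        rem from an office workstation (Windows): does a control connection open at all?
        telnet 66.118.64.208 21
        tracert 66.118.64.208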

    Read the article

  • How to validate selects / inserts are hitting the right server with MySQL Master/Slave

    - by bwizzy
    I've got a rails app using the master_slave_adapter plugin (http://github.com/mauricio/master_slave_adapter/tree/master) to send all selects to a slave, and all other statements to the master. Replication is setup using Mysql master / slave. I'm trying to validate that all the SQL statements are indeed going to the right place. Selects to the slave (db2), inserts to the master (db1) but I'm not sure how to do it. I've tried using tcpdump on the webservers: sudo /usr/sbin/tcpdump -q -i eth0 dst port 3306 and this is the output for a page request with a ton of selects: 10:32:36.570930 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 0 10:32:36.576805 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 0 10:32:36.577201 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 0 10:32:36.577980 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 86 10:32:36.578186 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 21 10:32:36.578359 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 27 10:32:36.578522 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 5 10:32:36.578741 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 13 10:32:36.579611 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 29 10:32:36.588201 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 0 10:32:36.588323 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 0 10:32:36.588677 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 0 10:32:36.588784 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 86 It doesn't look like all the selects are going to the slave. Maybe this isn't the right way to test, anyone know a better way?
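
    Rather than sniffing packets, the servers themselves can be asked; a hedged sketch: compare the SELECT/INSERT counters on master and slave before and after loading a page, or briefly turn on the general query log on the slave (MySQL 5.1+ syntax assumed).

        -- run on db1 (master) and db2 (slave) before and after a page load
        SHOW GLOBAL STATUS LIKE 'Com_select';
        SHOW GLOBAL STATUS LIKE 'Com_insert';

        -- or temporarily log every statement on the slave (remember to turn it off)
        SET GLOBAL general_log_file = '/tmp/db2-queries.log';
        SET GLOBAL general_log = 'ON';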

    Read the article

  • How to use gettimeofday() or something equivalent with Visual Studio C++ 2008?

    - by make
    Hi, Could someone please help me to use gettimeofday() function with Visual Studio C++ 2008 on Windows XP? here is a code that I found somewhere on the net: #include < time.h > #include <windows.h> #if defined(_MSC_VER) || defined(_MSC_EXTENSIONS) #define DELTA_EPOCH_IN_MICROSECS 11644473600000000Ui64 #else #define DELTA_EPOCH_IN_MICROSECS 11644473600000000ULL #endif struct timezone { int tz_minuteswest; /* minutes W of Greenwich */ int tz_dsttime; /* type of dst correction */ }; int gettimeofday(struct timeval *tv, struct timezone *tz) { FILETIME ft; unsigned __int64 tmpres = 0; static int tzflag; if (NULL != tv) { GetSystemTimeAsFileTime(&ft); tmpres |= ft.dwHighDateTime; tmpres <<= 32; tmpres |= ft.dwLowDateTime; /*converting file time to unix epoch*/ tmpres -= DELTA_EPOCH_IN_MICROSECS; tmpres /= 10; /*convert into microseconds*/ tv->tv_sec = (long)(tmpres / 1000000UL); tv->tv_usec = (long)(tmpres % 1000000UL); } if (NULL != tz) { if (!tzflag) { _tzset(); tzflag++; } tz->tz_minuteswest = _timezone / 60; tz->tz_dsttime = _daylight; } return 0; } ... // call gettimeofday() gettimeofday(&tv, &tz); tm = localtime(&tv.tv_sec); Last yesr when I tested this code VC++6, it works fine. But now when I use VC++ 2008, I am getting error of exception handling. So is there any idea on how to use gettimeofday or something equivalent? Thanks for your reply and any help would be very appreciated:
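
    For what it's worth, a minimal sketch that drops the timezone half entirely often compiles cleanly under VS2008: struct timeval comes from <winsock2.h>, and the function simply converts the Win32 FILETIME epoch (1601) to the Unix epoch (1970), mirroring the conversion in the code above. The helper name is made up.

        #include <winsock2.h>   /* struct timeval */
        #include <windows.h>

        static int my_gettimeofday(struct timeval *tv)
        {
            FILETIME ft;
            unsigned __int64 t;

            GetSystemTimeAsFileTime(&ft);
            t = ((unsigned __int64)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
            t /= 10;                      /* 100 ns units -> microseconds */
            t -= 11644473600000000ULL;    /* shift epoch: 1601-01-01 -> 1970-01-01 */
            tv->tv_sec  = (long)(t / 1000000UL);
            tv->tv_usec = (long)(t % 1000000UL);
            return 0;
        }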

    Read the article

  • Scapy installed, but when I use it as a module it's full of errors

    - by Rami Jarrar
    I installed Scapy 2.x (after getting some missing modules so it would install), and now I'm trying to use it as a module in my Python programs, but I can't: it gives me a lot of errors. I downloaded and installed some missing modules, and after a lot of work I still end up with this: Traceback (most recent call last): File "<pyshell#0>", line 1, in <module> from scapy.all import * File "C:\Python26\scapy\all.py", line 43, in <module> from crypto.cert import * File "C:\Python26\scapy\crypto\cert.py", line 15, in <module> from Crypto.PublicKey import * File "C:\Python26\lib\Crypto\PublicKey\RSA.py", line 34, in <module> from Crypto import Random File "C:\Python26\lib\Crypto\Random\__init__.py", line 29, in <module> import _UserFriendlyRNG File "C:\Python26\lib\Crypto\Random\_UserFriendlyRNG.py", line 36, in <module> from Crypto.Random.Fortuna import FortunaAccumulator File "C:\Python26\lib\Crypto\Random\Fortuna\FortunaAccumulator.py", line 36, in <module> import FortunaGenerator File "C:\Python26\lib\Crypto\Random\Fortuna\FortunaGenerator.py", line 32, in <module> from Crypto.Util import Counter File "C:\Python26\lib\Crypto\Util\Counter.py", line 27, in <module> import _counter ImportError: No module named _counter It happens when I run the following code: from scapy.all import * p=sr1(IP(dst=ip_dst)/ICMP()) if p: p.show() So what should I do? Is there a solution for this?
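
    The traceback bottoms out inside PyCrypto, not Scapy: Crypto.Util.Counter is trying to import its compiled _counter extension and failing, which usually means the PyCrypto installation is broken or was installed without its C extensions. A quick hedged check, independent of Scapy:

        # if this import alone fails with "No module named _counter", the problem is the
        # PyCrypto install itself; reinstalling a PyCrypto build made for this Python 2.6
        # on Windows (or building it with a compiler) is the usual fix
        from Crypto.Util import Counter
        print Counter.new(64)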

    Read the article

  • Date since 1600 to NSDate?

    - by Steven Fisher
    I have a date that's stored as a number of days since January 1, 1600 that I need to deal with. This is a legacy date format that I need to read many, many times in my application. Previously, I'd been creating a calendar, empty date components and root date like this: self.gregorian = [[[NSCalendar alloc] initWithCalendarIdentifier: NSGregorianCalendar ] autorelease]; id rootComponents = [[[NSDateComponents alloc] init] autorelease]; [rootComponents setYear: 1600]; [rootComponents setMonth: 1]; [rootComponents setDay: 1]; self.rootDate = [gregorian dateFromComponents: rootComponents]; self.offset = [[[NSDateComponents alloc] init] autorelease]; Then, to convert the integer later to a date, I use this: [offset setDay: theLegacyDate]; id eventDate = [gregorian dateByAddingComponents: offset toDate: rootDate options: 0]; (I never change any values in offset anywhere else.) The problem is I'm getting a different time for rootDate on iOS vs. Mac OS X. On Mac OS X, I'm getting midnight. On iOS, I'm getting 8:12:28. (So far, it seems to be consistent about this.) When I add my number of days later, the weird time stays. OS | legacyDate | rootDate | eventDate ======== | ========== | ==========================|========================== Mac OS X | 143671 | 1600-01-01 00:00:00 -0800 | 1993-05-11 00:00:00 -0700 iOS | 143671 | 1600-01-01 08:12:28 +0000 | 1993-05-11 07:12:28 +0000 In the previous release of my product, I didn't care about the time; now I do. Why the weird time on iOS, and what should I do about it? (I'm assuming the hour difference is DST.) I've tried setting the hour, minute and second of rootComponents to 0. This has no impact. If I set them to something other than 0, it adds them to 8:12:28. I've been wondering if this has something to do with leap seconds or other cumulative clock changes. Or is this entirely the wrong approach to use on iOS?
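
    One hedged explanation: for dates before standard time zones existed, the tz database records local mean time, and iOS and Mac OS X can disagree about how far back to apply it, which is where an offset like 8:12:28 can come from. Pinning the calendar to GMT before building the root date sidesteps that; a sketch in the same manual-retain style as the code above:

        // do the whole calculation in GMT so historical local-mean-time
        // offsets in the tz data can't leak into the result
        NSCalendar *gregorian = [[[NSCalendar alloc]
            initWithCalendarIdentifier:NSGregorianCalendar] autorelease];
        gregorian.timeZone = [NSTimeZone timeZoneForSecondsFromGMT:0];

        NSDateComponents *rootComponents = [[[NSDateComponents alloc] init] autorelease];
        rootComponents.year  = 1600;
        rootComponents.month = 1;
        rootComponents.day   = 1;
        NSDate *rootDate = [gregorian dateFromComponents:rootComponents];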

    Read the article

  • Git-svn refuses to create branch on svn repository error: "not in the same repository"

    - by Danny
    I am attempting to create a svn branch using git-svn. The repository was created with --stdlayout. Unfortunately it generates an error stating the "Source and dest appear not to be in the same repository". The error appears to be the result of it not including the username in the source url. $ git svn branch foo-as-bar -m "Attempt to make Foo into Bar." Copying svn+ssh://my.foo.company/r/sandbox/foo/trunk at r1173 to svn+ssh://[email protected]/r/sandbox/foo/branches/foo-as-bar... Trying to use an unsupported feature: Source and dest appear not to be in the same repository (src: 'svn+ssh://my.foo.company/r/sandbox/foo/trunk'; dst: 'svn+ssh://[email protected]/r/sandbox/foo/branches/foo-as-bar') at /home/me/.install/git/libexec/git-core/git-svn line 610 I intially thought this was simply a configuration issue, examination of .git/config doesn't suggest anything incorrect. [svn-remote "svn"] url = svn+ssh://[email protected]/r fetch = sandbox/foo/trunk:refs/remotes/trunk branches = sandbox/foo/branches/*:refs/remotes/* tags = sandbox/foo/tags/*:refs/remotes/tags/* I am using git version 1.6.3.3. Can anyone shed any light on why this might be occuring, and how best to address it?
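
    A hedged workaround for this class of mismatch: keep the username out of the svn-remote URL entirely and supply it through SSH instead, so the source and destination URLs git-svn builds compare equal. The hostname mirrors the error message and "me" is only a placeholder; treat this as a sketch, since rewriting the remote URL of an existing clone should be done carefully.

        # ~/.ssh/config -- let ssh add the username instead of the URL
        Host my.foo.company
            User me

        # .git/config -- svn-remote URL without the embedded username
        [svn-remote "svn"]
            url = svn+ssh://my.foo.company/r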

    Read the article

  • I want to stream video using the VideoView class. Can anyone tell me what format it supports?

    - by eddyxd
    Hi, I am an Android newbie, but I have gone through the tutorials and implemented some simple applications. The problem I've run into is that when I try to stream video from my server to Android, the VideoView class plays only the audio, with no picture. Here is my setup and Android code: 1. Android code: mVideoView01.setVideoURI(Uri.parse("rtsp://192.168.16.1:8080/test.sdp")); mVideoView01.start(); 2. My streaming server is VLC and the command is: vlc -vvv d:\nobody.mp4 --sout=#transcode{vcodec=h264,width=320,hegiht=240}:rtp{dst=192.168.16.1,port=4444,sdp=rtsp://192.168.16.1:8080/test.sdp} PS: My IP comes from DHCP, but I have checked that it really is reachable (Android does play the audio, after all). PS2: I have tried streaming video from "http://www.americafree.tv/" and it plays fine, so I suspect the problem is the streaming video format, but I have tried almost every option in VLC and it still doesn't work. Has anyone done a similar test who can give me some advice? Thanks a lot! - by eddy
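
    Two hedged observations: the VLC command as pasted spells height as "hegiht", so the resize option is silently ignored, and older Android releases tend to be much happier with MPEG-4 Simple Profile video (mp4v) plus AAC audio over RTSP than with arbitrary H.264 settings. A sketch of a transcode line to try (the bitrates are just examples):

        vlc -vvv d:\nobody.mp4 --sout "#transcode{vcodec=mp4v,vb=512,fps=25,width=320,height=240,acodec=mp4a,ab=96}:rtp{dst=192.168.16.1,port=4444,sdp=rtsp://192.168.16.1:8080/test.sdp}"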

    Read the article

  • Math on Django Templates

    - by Leandro Abilio
    Here's another question about Django. I have this code: views.py cursor = connections['cdr'].cursor() calls = cursor.execute("SELECT * FROM cdr where calldate > '%s'" %(start_date)) result = [SQLRow(cursor, r) for r in cursor.fetchall()] return render_to_response("cdr_user.html", {'calls':result }, context_instance=RequestContext(request)) I use a MySQL query like that because the database is not part of a django project. My cdr table has a field called duration, I need to divide that by 60 and multiply the result by a float number like 0.16. Is there a way to multiply this values using the template tags? If not, is there a good way to do it in my views? My template is like this: {% for call in calls %} <tr class="{% cycle 'odd' 'even' %}"><h3> <td valign="middle" align="center"><h3>{{ call.calldate }}</h3></td> <td valign="middle" align="center"><h3>{{ call.disposition }}</h3></td> <td valign="middle" align="center"><h3>{{ call.dst }}</h3></td> <td valign="middle" align="center"><h3>{{ call.billsec }}</h3></td> <td valign="middle" align="center">{{ (call.billsec/60)*0.16 }}</td></h3> </tr> {% endfor %} The last is where I need to show the value, I know the "(call.billsec/60)*0.16" is impossible to be done there. I wrote it just to represent what I need to show.
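
    Arithmetic like that doesn't belong in the template language, but a tiny custom filter keeps the template clean; a sketch (the file name, filter name and the 0.16 rate are just examples):

        # yourapp/templatetags/billing_extras.py  (needs an empty __init__.py next to it)
        from django import template

        register = template.Library()

        @register.filter
        def billed(billsec, rate):
            """Seconds -> minutes, multiplied by a per-minute rate."""
            try:
                return (float(billsec) / 60.0) * float(rate)
            except (TypeError, ValueError):
                return ''

    In the template, after {% load billing_extras %}, the cell becomes {{ call.billsec|billed:0.16 }}; computing the value in the view and attaching it to each row before rendering works just as well.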

    Read the article

  • Possible to Inspect Innards of Core C# Functionality

    - by Nick Babcock
    I was struck today, with the inclination to compare the innards of Buffer.BlockCopy and Array.CopyTo. I am curious to see if Array.CopyTo called Buffer.BlockCopy behind the scenes. There is no practical purpose behind this, I just want to further my understanding of the C# language and how it is implemented. Don't jump the gun and accuse me of micro-optimization, but you can accuse me of being curious! When I ran ILasm on mscorlib.dll I received this for Array.CopyTo .method public hidebysig newslot virtual final instance void CopyTo(class System.Array 'array', int32 index) cil managed { // Code size 0 (0x0) } // end of method Array::CopyTo and this for Buffer.BlockCopy .method public hidebysig static void BlockCopy(class System.Array src, int32 srcOffset, class System.Array dst, int32 dstOffset, int32 count) cil managed internalcall { .custom instance void System.Security.SecuritySafeCriticalAttribute::.ctor() = ( 01 00 00 00 ) } // end of method Buffer::BlockCopy Which, frankly, baffles me. I've never run ILasm on a dll/exe I didn't create. Does this mean that I won't be able to see how these functions are implemented? Searching around only revealed a stackoverflow question, which Marc Gravell said [Buffer.BlockCopy] is basically a wrapper over a raw mem-copy While insightful, it doesn't answer my question if Array.CopyTo calls Buffer.BlockCopy. I'm specifically interested in if I'm able to see how these two functions are implemented, and if I had future questions about the internals of C#, if it is possible for me to investigate it. Or am I out of luck?
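
    A hedged way to confirm why the disassembler shows no body without digging further: ask reflection how the method is implemented. InternalCall means the implementation lives inside the CLR itself as native code, so there is no IL to inspect; the shared-source CLI or a decompiler is where such internals become visible.

        using System;
        using System.Reflection;

        class Probe
        {
            static void Main()
            {
                MethodInfo blockCopy = typeof(Buffer).GetMethod("BlockCopy");
                MethodInfo copyTo = typeof(Array).GetMethod(
                    "CopyTo", new[] { typeof(Array), typeof(int) });

                // InternalCall => implemented natively inside the CLR, no IL body to read
                Console.WriteLine(blockCopy.GetMethodImplementationFlags());
                Console.WriteLine(copyTo.GetMethodImplementationFlags());
            }
        }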

    Read the article

  • Python: Copying files with special characters in path

    - by erikderwikinger
    Hi is there any possibility in Python 2.5 to copy files having special chars (Japanese chars, cyrillic letters) in their path? shutil.copy cannot handle this. here is some example code: import copy, os,shutil,sys fname=os.getenv("USERPROFILE")+"\\Desktop\\testfile.txt" print fname print "type of fname: "+str(type(fname)) fname0 = unicode(fname,'mbcs') print fname0 print "type of fname0: "+str(type(fname0)) fname1 = unicodedata.normalize('NFKD', fname0).encode('cp1251','replace') print fname1 print "type of fname1: "+str(type(fname1)) fname2 = unicode(fname,'mbcs').encode(sys.stdout.encoding) print fname2 print "type of fname2: "+str(type(fname2)) shutil.copy(fname2,'C:\\') the output on a Russian Windows XP C:\Documents and Settings\+????????????\Desktop\testfile.txt type of fname: <type 'str'> C:\Documents and Settings\?????????????\Desktop\testfile.txt type of fname0: <type 'unicode'> C:\Documents and Settings\+????????????\Desktop\testfile.txt type of fname1: <type 'str'> C:\Documents and Settings\?????????????\Desktop\testfile.txt type of fname2: <type 'str'> Traceback (most recent call last): File "C:\Test\getuserdir.py", line 23, in <module> shutil.copy(fname2,'C:\\') File "C:\Python25\lib\shutil.py", line 80, in copy copyfile(src, dst) File "C:\Python25\lib\shutil.py", line 46, in copyfile fsrc = open(src, 'rb') IOError: [Errno 2] No such file or directory: 'C:\\Documents and Settings\\\x80\ xa4\xac\xa8\xad\xa8\xe1\xe2\xe0\xa0\xe2\xae\xe0\\Desktop\\testfile.txt'
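
    A hedged sketch of the approach that usually works on Windows with Python 2.x: keep the path as a unicode object the whole way through and hand that to shutil, instead of round-tripping it through cp1251 or the console encoding (which is where the question marks and the mangled profile path come from).

        # -*- coding: utf-8 -*-
        import os
        import shutil

        # decode the environment value once, then stay in unicode
        profile = os.getenv("USERPROFILE").decode("mbcs")
        fname = os.path.join(profile, u"Desktop", u"testfile.txt")

        shutil.copy(fname, u"C:\\")   # unicode paths go through the wide Win32 file APIs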

    Read the article

  • php error reporting - having trouble matching local & web server settings

    - by Andrew Heath
    I'm trying to add a custom error handler to my site, but in doing so have discovered that my webhost's PHP error reporting settings and those of my localhost (default XAMPP) vary considerably. While I thought I was programming to E_STRICT like a good little boy, adding the error handler to my webhost revealed craploads of Runtime Notices. Example: Runtime notice strtotime() [function.strtotime]: It is not safe to rely on the system's timezone settings. Please use the date.timezone setting, the TZ environment variable or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'America/Chicago' for 'CST/-6.0/no DST' instead In /home/... Clearly this isn't a red-alert, showstopping error. But what bothers me is that it doesn't show up on my localhost. I'd certainly like to improve my code by addressing these sorts of issues if I could see them! I've looked through both php.ini files, and my webhost's setting is error_reporting = E_ALL & ~E_NOTICE whereas mine was error_reporting = E_STRICT, which I had thought was better. However, changing mine to match and rebooting the server doesn't seem to have accomplished anything. Could someone please point me in the right direction?
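
    The strtotime() notice comes from an unset date.timezone, which the local XAMPP configuration happens to set (or hide) and the host's does not, so it is worth fixing at the source rather than by lowering the reporting level; a hedged sketch of both knobs:

        <?php
        // during development: see everything, including E_STRICT and runtime notices
        error_reporting(E_ALL | E_STRICT);
        ini_set('display_errors', '1');

        // stop the timezone guessing for good -- pick the zone that actually applies
        date_default_timezone_set('America/Chicago');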

    Read the article

  • File.Move, why do i get a FileNotFoundException? The file exist...

    - by acidzombie24
    It's extremely weird, since the program is iterating over the very folder it then can't find! outfolder and infolder are both on H:/, my external HD, using Windows 7. The idea is to move all folders that only contain files with the extension db or svn-base. When I try to move a folder I get an exception: VS2010 tells me it can't find the folder specified in dir. This code is iterating through dir, so how can it not find it? string []theExt = new string[] { "db", "svn-base" }; foreach (var dir in Directory.GetDirectories(infolder)) { bool hit = false; if (Directory.GetDirectories(dir).Count() > 0) continue; foreach (var f in Directory.GetFiles(dir)) { var ext = Path.GetExtension(f).Substring(1); if(theExt.Contains(ext) == false) { hit = true; break; } } if (!hit) { var dst = outfolder + "\\" + Path.GetFileName(dir); File.Move(dir, outfolder); //FileNotFoundException: Could not find file dir. } } }
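
    A hedged reading of the exception: the path being moved is a directory, and the destination passed is the outfolder root rather than the per-directory dst that was just built, so File.Move goes looking for a file and fails. A sketch of the fix for that last block:

        if (!hit)
        {
            // build the destination path and move the directory, not a file
            var dst = Path.Combine(outfolder, Path.GetFileName(dir));
            Directory.Move(dir, dst);
        }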

    Read the article

  • Check if DateTime in specific range

    - by katit
    Need to check if DateTime is in specific range. I think I need to calculate knowing YEAR first and last date of DST time in this year. How would I figure "Sunday of week 2 of March" date? From 1/1/2007 12:00:00 AM to 12/31/9999 12:00:00 AM Begins at 2:00 AM on Sunday of week 2 of March Ends at 2:00 AM on Sunday of week 1 of November For example, I need to check if 11/21/2011 is between Sunday of week 2 in March and Sunday of week 1 of November - answer should be NO If I pass 8/8/2011 - answer should be yes. Basically, I need to write function to check if my date belongs to daylight savings time. My only idea so far is to write loops to find 2nd week for example. So, I would loop from Day 1 in March until I hit Sunday second time. Same thing I would loop (increment days by 1) from day 1 of November until I hit Sunday first time. In another words, I need function to check if input data is in Daylight Savings time period. Time period defined by constraint above. P.S. I can't use TimeZoneInfo since it's in Silverlight P.P.S I can't use DateTime.IsDaylightSavingsTime as I don't have times with kind "local"
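
    Without TimeZoneInfo, "Sunday of week 2 of March" can be computed directly instead of by looping; a hedged sketch for the US rule described above (the method names are made up):

        // nth Sunday of a month: jump from the 1st to the first Sunday, then add whole weeks
        static DateTime NthSunday(int year, int month, int n)
        {
            var first = new DateTime(year, month, 1);
            int toSunday = ((int)DayOfWeek.Sunday - (int)first.DayOfWeek + 7) % 7;
            return first.AddDays(toSunday + 7 * (n - 1));
        }

        static bool IsInUsDaylightTime(DateTime value)
        {
            DateTime start = NthSunday(value.Year, 3, 2).AddHours(2);   // 2 AM, 2nd Sunday of March
            DateTime end   = NthSunday(value.Year, 11, 1).AddHours(2);  // 2 AM, 1st Sunday of November
            return value >= start && value < end;   // 11/21/2011 -> false, 8/8/2011 -> true
        }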

    Read the article

  • Is it possible to achieve MAX(As,Ad) openGL blending?

    - by Jeff B
    I am working on a game where I want to create shadows under a series of sprites on a grid. The shadows are larger than the sprites themselves and the sprites are animated (i.e. move and rotate). I cannot simply render them into the sprite png, or the shadows will overlap adjacent sprites. I also cannot simply put shadows on a lower layer by themselves, because when they overlap, they will create dark bands at their intersection. These sprites are animated, so it is not feasible to render these en masse. Basically, I want the sprites' shadows to blend together such that they max out at a set opacity. Example: I believe this is equivalent to an openGL blending of (Rs,Gs,Bs,Max(As,Ds)), where I don't really care about R,G, and B, as it will always be the same color in src and dst. However, this is not a valid openGL blending mode. Is there an easy way to accomplish this, especially in cocos2d-iphone? I would be able to approximate this by making the shadow sprites opaque, then applying them both to a parent sprite, and making the parent sprite 40% opacity. However, the way cocos2d works, this only sets the opacity of each child to 40%, rather than the combined sprite image, which results in the same stripe.
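
    A MAX blend equation does exist, just not through glBlendFunc alone: with separate blend equations the colour channels can keep ordinary alpha blending while the alpha channel takes max(As, Ad). Availability is the catch on OpenGL ES (it comes from the EXT_blend_minmax and separate-blend-equation extensions), so treat this as a sketch to test rather than a guaranteed cocos2d-ready recipe:

        /* colour: normal source-over blending; alpha: keep the larger of src and dst */
        glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);
        glBlendEquationSeparate(GL_FUNC_ADD, GL_MAX);   /* GL_MAX_EXT on ES */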

    Read the article

  • Picture.writeToStream() not writing out all bitmaps

    - by quickdraw mcgraw
    I'm using webview.capturePicture() to create a Picture object that contains all the drawing objects for a webpage. I can successfully render this Picture object to a bitmap using the canvas.drawPicture(picture, dst) with no problems. However when I use picture.writeToStream(fos) to serialize the picture object out to file, and then Picture.createFromStream(fis) to read the data back in and create a new picture object, the resultant bitmap when rendered as above is missing any larger images (anything over around 20KB! by observation). This occurs on all the Android OS platforms that I have tested 1.5, 1.6 and 2.1. Looking at the native code for Skia which is the underlying Android graphics library and the output file produced from the picture.writeToStream() I can see how the file format is constructed. I can see that some of the images in this Skia spool file are not being written out (the larger ones), the code that appears to be the problem is in skBitmap.cpp in the method void SkBitmap::flatten(SkFlattenableWriteBuffer& buffer) const; It writes out the bitmap fWidth, fHeight, fRowBytes, FConfig and isOpaque values but then just writes out SERIALIZE_PIXELTYPE_NONE (0). This means that the spool file does not contain any pixel information about the actual image and therefore cannot restore the picture object correctly. Effectively this renders the writeToStream and createFromStream() APIs useless as they do not reliably store and recreate the picture data. Has anybody else seen this behaviour and if so am I using the API incorrectly, can it be worked around, is there an explanation i.e. incomplete API / bug and if so are there any plans for a fix in a future release of Android? Thanks in advance.
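
    A hedged workaround, given that writeToStream() drops the pixel data for the larger bitmaps: rasterize the Picture into a Bitmap and persist that instead, then draw the decoded Bitmap later rather than replaying the Picture. cacheFile below stands in for whatever File the app writes to.

        // render the picture once...
        Bitmap bmp = Bitmap.createBitmap(picture.getWidth(), picture.getHeight(),
                Bitmap.Config.ARGB_8888);
        new Canvas(bmp).drawPicture(picture);

        // ...and persist the pixels in a format that round-trips reliably
        FileOutputStream out = new FileOutputStream(cacheFile);
        bmp.compress(Bitmap.CompressFormat.PNG, 100, out);
        out.close();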

    Read the article

  • jQuery $.each()-problem

    - by Volmar
    Hi, I'm making a WordPress plugin and I have a function that imports images. This is done with a $.each() loop that calls .load() on every iteration. The page that .load() requests downloads the image and returns a number, which is written into a span element. The source and destination arrays are read from the LI elements of hidden ULs. This way the user sees a counter counting from zero up to the total number of images being imported. You can see my jQuery code below: jQuery(document).ready(function($) { $('#mrc_imp_img').click(function(){ var dstA = []; var srcA = []; $("#mrc_dst li").each(function() { dstA.push($(this).text()) }); $("#mrc_src li").each(function() { srcA.push($(this).text()) }); $.each(srcA, function (i,v) { $('#mrc_imgimport span.fc').load('/wp-content/plugins/myplugin/imp.php?num='+i+'&dst='+dstA[i]+'&src='+srcA[i]); }); }); }); This works pretty well, but sometimes it looks like the load function isn't updating the DOM as fast as it should, because the number the span is updated with is sometimes lower than the previous one, and almost every time a lower number replaces the last number at the end. How can I prevent this from happening, and how can I make it hide '#mrc_imp_img' when the $.each() loop is done?
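
    The .load() calls all fire at once, and there is no guarantee about the order their responses come back in, which is why the counter appears to run backwards; a hedged sketch that chains the requests so each one starts only after the previous finishes (same URL and markup as above):

        function importNext(i) {
            if (i >= srcA.length) {          // all done: hide the trigger
                $('#mrc_imp_img').hide();
                return;
            }
            $('#mrc_imgimport span.fc').load(
                '/wp-content/plugins/myplugin/imp.php?num=' + i +
                '&dst=' + dstA[i] + '&src=' + srcA[i],
                function () { importNext(i + 1); }   // only now start the next one
            );
        }
        importNext(0);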

    Read the article

  • Bladecenter-E Power Module fault

    - by Lihnjo
    We have problem on IBM Bladecenter-E Critical Events Power module 2 is off. DC fault. Power module 4 is off. DC fault. Warnings and System Events Insufficient chassis power to support redundancy What is the best solution for this problem? Thanks AMM Service Data Help SPAPP Capture Available 10/13/2010 17:03:47 1090347 bytes Time: 11/19/2012 11:02:31 UUID: 42E1 5D2F D7BF 41A6 A4A2 48D1 3FB7 0540 MAC Address xx:xx:xx:xx:xx:xx MM Information Name: nnnnn Contact: aaa, bbb, ccc, England Location: [email protected] IP address: 111.222.333.444 Date Time Information GMT offset: +1:00 - Central Europe Time (Western Europe, Algeria, Nigeria, Angola) Adjust for DST: Yes NTP: Enabled NTP Hostname/IP: 111.222.333.444 System Health: Critical System Status Summary One or more monitored parameters are abnormal. Critical Events Power module 2 is off. DC fault. Power module 4 is off. DC fault. Warnings and System Events Insufficient chassis power to support redundancy CHASSIS (BladeCenter-E) in CHASSIS slot: 01 TopoPath is "CHASSIS[1]". Description : BladeCenter-E Width : 1 Sub Type : BladeCenter (BC) Power Mode : 220 v KVM Owner : CHASSIS[1]/BLADE[9] MT Owner : CHASSIS[1]/MGMT_MOD[1] Component Type : CHASSIS Inventory: VPD ID: 336 (decimal) POS ID EXT: 0 (decimal) POS ID: 8 (decimal) Machine Type/Model: 86773RG Machine Serial Number: 99ZL816 Part Number: 39R8561 FRU Number: 39R8563 FRU Serial Number: YK109174W1HV Manufacturer ID: IBM Hardware Revision: 3 (decimal) Manufacture Date: 18 (wk), 07 (yr) UUID: 42E1 5D2F D7BF 41A6 A4A2 48D1 3FB7 0540 (hex) Type Code: 97 (decimal) Sub-type Code: 0 (decimal) IANA Num: 336 (decimal) Product ID: 8 (decimal) Manufacturer Sub ID: FOXC Enviroment data: -------------- Type: : POWER_USAGE Unit: : WATTS Reading: : 0xa Sensor Label: : Midplane Sensor ID: : 0x0 MGMT MOD (Advanced Management Module) in MGMT_MOD slot: 01 TopoPath is "CHASSIS[1]/MGMT_MOD[1]". 
Description : Advanced Management Module Name : kant Width : 1 Component Role : Primary Component Type : MGMT MOD Insert Time : 28050132 Inventory: VPD ID: 288 (decimal) POS ID EXT: 0 (decimal) POS ID: 4 (decimal) Part Number: 39Y9659 FRU Number: 39Y9661 FRU Serial Number: YK11836CE2RC Manufacturer ID: IBM Hardware Revision: 4 (decimal) Manufacture Date: 50 (wk), 06 (yr) UUID: 1D95 9937 8CA5 11DB 9499 0014 5EDF 1C98 (hex) Type Code: 81 (decimal) Sub-type Code: 1 (decimal) IANA Num: 20301 (decimal) Product ID: 65 (decimal) Manufacturer Sub ID: ASUS Firmware data: Type : AMM firmware Build ID : BPET50P File Name : CNETCMUS.PKT Release Date : 03/26/2010 Release Level : 50 Revision - Major: 80 Port info: ======================================================== Topology Path ID : 1 Label : External Phy Orientation : EXTERNAL Port Number : 1 Type : MGT Physical Meidum : Copper Number of Link Intferfaces : 1 ------------------------------------ Link Ifc ID Number : 1 Link Ifc Transport Protocol : ENET Link Ifc Addr Type : MAC Link Ifc Burned-in Addr : xx:xx:xx:xx:xx:xx Link Ifc Admin Addr : 00:00:00:00:00:00 Link Ifc Addr in use : xx:xx:xx:xx:xx:xx ---------------------------------------------------------- Configuration behaviors: Save Only Enviroment data: -------------- Type: : TEMPERATURE Unit: : DEGREES_C Reading: : 38.00 Sensor Label: : MM Ambient Sensor ID: : 0x0 -------------- Type: : VOLTAGE Unit: : VOLTS Reading: : +4.81 Sensor Label: : +5V Sensor ID: : 0x1b -------------- Type: : VOLTAGE Unit: : VOLTS Reading: : +3.26 Sensor Label: : +3.3V Sensor ID: : 0x19 -------------- Type: : VOLTAGE Unit: : VOLTS Reading: : +11.97 Sensor Label: : +12V Sensor ID: : 0x16 -------------- Type: : VOLTAGE Unit: : VOLTS Reading: : -4.88 Sensor Label: : -5V Sensor ID: : 0x1e -------------- Type: : VOLTAGE Unit: : VOLTS Reading: : +2.47 Sensor Label: : +2.5V Sensor ID: : 0x18 -------------- Type: : VOLTAGE Unit: : VOLTS Reading: : +1.76 Sensor Label: : +1.8V Sensor ID: : 0x15 -------------- Type: : POWER_USAGE Unit: : WATTS Reading: : 0x19 Sensor Label: : kant Sensor ID: : 0x0

    Read the article

  • Stop squid caching 302 and 307 with deny_info

    - by 0xception
    TLDR: 302, 307 and Error pages are being cached. Need to force a refresh of the content. Long version: I've setup a very minimal squid instance running on a gateway which shouldn't not cache ANYTHING but needs to be solely used as a domain based web filter. I'm using another application which redirects un-authenticated users to the proxy which then uses the deny_info option redirects any non-whitelisted request to the login page. After the user has authenticated the firewall rule gets placed so they no longer get sent to the proxy. The problem is that when a user hits a website (xkcd.com) they are unauthenticated so they get redirected via the firewall: iptables -A unknown-user -t nat -p tcp --dport 80 -j REDIRECT --to-port 39135 to the proxy at this point squid redirects the user to the login page using a 302 (i've also tried 307, and i've also make sure the headers are set to no-cache and/or no-store for Cache-Control and Pragma). Then when the user logs into the system they get firewall rule which no longer directs them to the squid proxy. But if they go to xkcd.com again they will have the original redirection page cached and will once again get the login page. Any idea how to force these redirects to NOT be cached by the browser? Perhaps this is a problem w/ the browsers and not squid, but not sure how to get around it. Full squid config below. # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 acl localnet src 192.168.182.0/23 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl https port 443 acl http port 80 acl CONNECT method CONNECT # # Disable Cache # cache deny all via off negative_ttl 0 seconds refresh_all_ims on #error_default_language en # Allow manager access only from localhost http_access allow manager localhost http_access deny manager # Deny access to anything other then http http_access deny !http # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !https visible_hostname gate.ovatn.net # Disable memory pooling memory_pools off # Never use neigh cache objects for cgi-bin scripts hierarchy_stoplist cgi-bin ? # # URL rewrite Test Settings # #acl whitelist dstdomain "/etc/squid/domains-pre.lst" #url_rewrite_program /usr/lib/squid/redirector #url_rewrite_access allow !whitelist #url_rewrite_children 5 startup=0 idle=1 concurrency=0 #http_access allow all # # Deny Info Error Test # acl whitelist dstdomain "/etc/squid/domains-pre.lst" deny_info http://login.domain.com/ whitelist #deny_info ERR_ACCESS_DENIED whitelist http_access deny !whitelist http_access allow whitelist http_port 39135 transparent ## Debug Values access_log /var/log/squid/access-pre.log cache_log /var/log/squid/cache-pre.log # Production Values #access_log /dev/null #cache_log /dev/null # Set PID file pid_filename /var/run/gatekeeper-pre.pid SOLUTION: I believe I might have found a solution to this. After days and days trying to figure it out, only through a random stumble I found client_persistent_connections off server_persistent_connections off This did the trick. So it wasn't so much cache as it was a single persistent connection messing things up. W000T!

    Read the article

  • configure squid3 to set up a web proxy in ubuntu12.04

    - by Gnijuohz
    I am in a LAN and have to use a proxy given to access the web in a very limited way. I can't even use google, github.com or SE sites. However I can use ssh to log into a server, which I have root access so basically I can do anything I want with it. So I was thinking that maybe I could use that server as a proxy so I can visit sites through it. I tested it using ssh -vT [email protected] which gave a proper response. And In my computer I can't do this. Also I tried downloading something from the gun.org using wget, which can't be done in my computer too. And it succeeded on that server. I don't know if that's enough to say that this server have full access to the Internet. But I assumed so and I installed squid3 on it. After trying some while, I failed to get it working. I got this after I run squid3 -k parse 2012/07/06 21:45:18| Processing Configuration File: /etc/squid3/squid.conf (depth 0) 2012/07/06 21:45:18| Processing: acl manager proto cache_object 2012/07/06 21:45:18| Processing: acl localhost src 127.0.0.1/32 ::1 2012/07/06 21:45:18| Processing: acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 2012/07/06 21:45:18| Processing: acl localnet src 10.1.0.0/16 # RFC1918 possible internal network 2012/07/06 21:45:18| Processing: acl SSL_ports port 443 2012/07/06 21:45:18| Processing: acl Safe_ports port 80 # http 2012/07/06 21:45:18| Processing: acl Safe_ports port 21 # ftp 2012/07/06 21:45:18| Processing: acl Safe_ports port 443 # https 2012/07/06 21:45:18| Processing: acl Safe_ports port 70 # gopher 2012/07/06 21:45:18| Processing: acl Safe_ports port 210 # wais 2012/07/06 21:45:18| Processing: acl Safe_ports port 1025-65535 # unregistered ports 2012/07/06 21:45:18| Processing: acl Safe_ports port 280 # http-mgmt 2012/07/06 21:45:18| Processing: acl Safe_ports port 488 # gss-http 2012/07/06 21:45:18| Processing: acl Safe_ports port 591 # filemaker 2012/07/06 21:45:18| Processing: acl Safe_ports port 777 # multiling http 2012/07/06 21:45:18| Processing: acl CONNECT method CONNECT 2012/07/06 21:45:18| Processing: http_port 3128 transparent vhost vport 2012/07/06 21:45:18| Starting Authentication on port [::]:3128 2012/07/06 21:45:18| Disabling Authentication on port [::]:3128 (interception enabled) 2012/07/06 21:45:18| Disabling IPv6 on port [::]:3128 (interception enabled) 2012/07/06 21:45:18| Processing: cache_mem 1000 MB 2012/07/06 21:45:18| Processing: cache_swap_low 90 2012/07/06 21:45:18| Processing: coredump_dir /var/spool/squid3 2012/07/06 21:45:18| Processing: refresh_pattern ^ftp: 1440 20% 10080 2012/07/06 21:45:18| Processing: refresh_pattern ^gopher: 1440 0% 1440 2012/07/06 21:45:18| Processing: refresh_pattern -i (/cgi-bin/|?) 0 0% 0 2012/07/06 21:45:18| Processing: refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880 2012/07/06 21:45:18| Processing: refresh_pattern . 0 20% 4320 2012/07/06 21:45:18| Processing: ipcache_high 95 2012/07/06 21:45:18| Processing: http_access allow all I deleted some allow and deny rules and added http_access allow all so that all the request would be allowed. After configuring my computer, I got this error: Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect. And the log in the server showed that my TCP requests had all been denied. So, first of all, is what I am trying to do achievable? If so, how to configure the squid in the server so that I use it as a proxy to surf the Internet? My computer and the server both run Ubuntu11.04. 
Thanks for any help~
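
    One hedged observation about the config above: http_port 3128 transparent is meant for traffic that iptables REDIRECTs to the proxy, and Squid 3.1 can refuse ordinary forward-proxy requests arriving on an interception port, which would match the access-denied page seen from an explicitly configured browser even with allow all. A sketch of the plain forward-proxy variant to test:

        # plain forward-proxy listener for browsers configured to use the proxy directly
        http_port 3128

        # keep an interception port only if iptables actually redirects traffic to it
        # http_port 3129 transparent

        acl localnet src 10.1.0.0/16
        http_access allow localnet
        http_access deny all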

    Read the article

  • Squid Proxy: url_regex acl is not working?

    - by bharathi
    I am using squid proxy 3.1 in ubuntu machine. I want to allow only urls matching our pattern through our proxy server. I configured acl like below. Acl for dstdomain is working fine. If i access any url besides .zmedia.com , I got proxy connection refused. But the url_regex is not working. What i am trying here is. Allow only request from ".zmedia.com" domain and the request url should be in "/blog" context. # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 ::1 acl urlwhitelist url_regex -i ^http(s)://([a-zA-Z]+).zmedia.com/blog/.*$ acl allowdomain dstdomain .zmedia.com acl Safe_ports port 80 8080 8500 7272 # Example rule allowing access from your local networks. # Adapt to list your (internal) IP networks from where browsing # should be allowed acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl SSL_ports port 7272 # multiling http acl CONNECT method CONNECT # # Recommended minimum Access Permission configuration: # # Only allow cachemgr access from localhost http_access allow manager localhost http_access deny manager http_access deny !allowdomain http_access allow urlwhitelist http_access allow CONNECT SSL_ports http_access deny CONNECT !SSL_ports # Deny requests to certain unsafe ports http_access deny !Safe_ports # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !SSL_ports # We strongly recommend the following be uncommented to protect innocent # web applications running on the proxy server who think the only # one who can access services on "localhost" is a local user #http_access deny to_localhost # # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS # # Example rule allowing access from your local networks. # Adapt localnet in the ACL section to list your (internal) IP networks # from where browsing should be allowed http_access allow localhost # And finally deny all other access to this proxy http_access deny all # Squid normally listens to port 3128 http_port 3128 # We recommend you to use at least the following line. hierarchy_stoplist cgi-bin ? # Uncomment and adjust the following to add a disk cache directory. #cache_dir ufs /var/spool/squid 100 16 256 # Leave coredumps in the first cache dir coredump_dir /var/spool/squid append_domain .zmedia.com # Add any of your own refresh_pattern entries above these. refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern . 0 20% 4320 Please correct me , If i did anything wrong?
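
    A hedged note on the regex itself: (s) without a quantifier makes the "s" mandatory, so plain http URLs never match; the dots are unescaped; and https requests reach the proxy as CONNECT host:port with no path visible, so a /blog restriction can only ever apply to http. A sketch of the acl with those points addressed:

        # optional "s", escaped dots, and a character class that also allows digits/hyphens
        acl urlwhitelist url_regex -i ^https?://([a-zA-Z0-9-]+)\.zmedia\.com/blog/.*$

        http_access deny !allowdomain
        http_access allow urlwhitelist
        http_access deny all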

    Read the article

< Previous Page | 8 9 10 11 12 13 14  | Next Page >