Search Results

Search found 19221 results on 769 pages for 'custom forms'.


  • What are incentives (if any) to use WinRT instead of .Net?

    - by Ark-kun
    Let's compare WinRT with .Net.

    .Net
      - .Net is the 13+ years evolution of COM. Three main parts of .Net are execution environment, standard libraries and supported languages.
      - CLR is the native-code execution environment based on COM.
      - .Net Framework has a big set of standard libraries (implemented using managed and native code) that can be used from all .Net languages. There are .Net classes that allow using OS APIs.
      - WPF or Silverlight provide a XAML-based UI framework.
      - .Net can be used with C++, C#, Javascript, Python, Ruby, VB, LISP, Scheme and many other languages. C++/.Net is a variation of the C++ language that allows interaction with .Net objects.
      - .Net supports inheritance, generics, operator and method overloading and many other features.
      - .Net allows creating apps that run on Windows (XP, 7, 8 Pro (Desktop and Metro), RT, CE, etc), Mac OS, Linux (+ other *nix); iOS, Android, Windows Phone (7, 8); Internet Explorer, Chrome, Firefox; XBox 360, Playstation Suite; raw microprocessors.
      - There is support for creating games (2D/3D) using any managed language or C++.
      - Created by Developer Division.

    WinRT
      - WinRT is based on COM. Three main parts of WinRT are execution environment, standard libraries and supported languages.
      - WinRT has a native-code execution environment based on COM.
      - WinRT has a set of standard libraries that more or less can be used from WinRT languages. There are WinRT classes that allow using OS APIs.
      - Unnamed Silverlight clone provides a XAML-based UI framework.
      - WinRT can be used with C++, C#, Javascript, VB. C++/CX is a variation of the C++ language that allows interaction with WinRT objects.
      - Custom WinRT components don't support inheritance (classes must be sealed), generics, operator overloading and many other features.
      - WinRT allows creating apps that run on Windows 8 Pro and RT (Metro only); Windows Phone 8 (limited).
      - There is support for creating games (2D/3D) using C++ only.
      - Ordered by Windows Team.

    I think that all the aspects except the last ones are very important for developers. On the other hand it seems that the most important aspect for Microsoft is the last one. So, given the above comparison of conceptually identical technologies, what are incentives (if any) to use WinRT instead of .Net?

    Read the article

  • BIND DNS server (Windows) - Unable to access my local domain from other computers on LAN

    - by Ricardo Saraiva
    I have a BIND DNS server running on my Windows 7 development machine and I'm serving pages with WAMPSERVER. My idea is to develop some tools (in PHP) for my intranet at work and I want them to be accessible via LAN in this format: http://tools.mycompany.com. I've already set up BIND and I can access http://tools.mycompany.com on the machine that holds the BIND server, but I cannot access it from other LAN computers.

    I've done the following on my router:
      - defined static IPs for all LAN computers
      - set port forwarding to my server (remember: it serves DNS and web pages)
      - set the DNS server configuration to point to my LAN server

    On the LAN computers, I went to the Local Area Network properties and also changed the DNS server IP in order to point to my local DNS server. If it helps, here is my named.conf file:

        options {
            directory "c:\windows\SysWOW64\dns\etc";
            forwarders { 127.0.0.1; 8.8.8.8; 8.8.4.4; };
            pid-file "run\named.pid";
            allow-transfer { none; };
            recursion no;
        };
        logging {
            channel my_log {
                file "log\named.log" versions 3 size 2m;
                severity info;
                print-time yes;
                print-severity yes;
                print-category yes;
            };
            category default { my_log; };
        };
        zone "mycompany.com" IN {
            type master;
            file "zones\db.mycompany.com.txt";
            allow-transfer { none; };
        };
        key "rndc-key" {
            algorithm hmac-md5;
            secret "qfApxn0NxXiaacFHpI86Rg==";
        };
        controls {
            inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; };
        };

    ...and a single zone I've defined - file db.mycompany.com.txt:

        $TTL 6h
        @      IN SOA  tools.mycompany.com. hostmaster.mycompany.com. (
                       2014042601 10800 3600 604800 86400 )
        @      NS      tools.mycompany.com.
        tools  IN A    192.168.1.4
        www    IN A    192.168.1.4

    In the file above, 192.168.1.4 is the IP of the local machine inside my LAN. Can someone help me here? I need my web pages to be accessible from other computers inside my LAN using my custom domain name. I've tried on other computers and they can access my server via http://192.168.1.4/, but not when using http://tools.mycompany.com. Please consider the following: I'm completely new to BIND, and I have only basic knowledge of Apache configuration. Thanks a lot for your help.
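
    As a quick way to verify that the BIND box itself answers for the record when asked from a LAN client, here is a minimal Python sketch. It assumes the third-party dnspython package; the hostname and server IP are the ones from the question, everything else is illustrative.

        # pip install dnspython
        import dns.resolver

        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = ["192.168.1.4"]   # the BIND server from the question

        try:
            answer = resolver.resolve("tools.mycompany.com", "A")  # use resolver.query() on dnspython 1.x
            for record in answer:
                print("resolved to", record.address)
        except Exception as exc:
            print("lookup failed:", exc)

    If this prints 192.168.1.4 when run from another LAN computer, BIND is answering and the problem is more likely the client's resolver settings or the router handing out a different DNS server.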

    Read the article

  • TFS2010 Hangs “Waiting for Build Agent”

    - by Qpirate
    I have asked this question over on SO (the link to the question is here), but I am hoping this is a better place to ask it. I have 3 VMs, each running the TFS Build Host Service: one has 1 controller and 1 agent, and the other two have 2 build agents each. Most of the time (7/10 builds) it comes back with the following error message:

        TF215097: An error occurred while initializing a build for build definition BUILD_DEFINITION:
        There was no endpoint listening at http://MACHINE1:9191/Build/v3.0/Services/Controller/14
        that could accept the message. This is often caused by an incorrect address or SOAP action.
        See InnerException, if present, for more details.

    There are no errors when I do get this message. The following is the config file that I have created:

        <configuration>
          <appSettings>
            <add key="traceWriter" value="true"/>
          </appSettings>
          <system.diagnostics>
            <switches>
              <add name="BuildServiceTraceLevel" value="4"/>
              <add name="API" value="4"/>
              <add name="Authentication" value="4"/>
              <add name="Authorization" value="4"/>
              <add name="Database" value="4"/>
              <add name="General" value="4"/>
              <add name="traceLevel" value="4"/>
            </switches>
            <trace autoflush="true" indentsize="4">
              <listeners>
                <add name="myListener" type="Microsoft.TeamFoundation.TeamFoundationTextWriterTraceListener,Microsoft.TeamFoundation.Common, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" initializeData="c:\logs\TFSBuildServiceHost.exe.log" />
                <remove name="Default" />
              </listeners>
            </trace>
          </system.diagnostics>
        </configuration>

    I do have my own custom activities in my build process, but this does not seem to be the problem, as sometimes the build actually does go through. I have tried refreshing the template as some sites suggest. Has anyone come across a solution for this problem? Or can anyone tell me how to catch these errors when they happen?

    Read the article

  • Monitoring tools that can take high rate and high volume?

    - by Jon Watte
    We're using Cacti with RRDTool to monitor and graph about 100,000 counters spread across about 1,000 Linux-based nodes. However, our current setup generally only gives us 5-minute graphs (with some data being minute-based); we often make changes where seeing feedback in "near real time" would be of value. I'd like approximately a week of 5- or 10-second data, a year of 1-minute data, and 5 years of 10-minute data. I have SSD disks and a dual-hexa-core server to spare.

    I tried setting up a Graphite/carbon/whisper server and had about 15 nodes pipe to it, but it only has "average" for the retention function when promoting to older buckets. This is almost useless -- I'd like min, max, average, standard deviation, and perhaps "total sum" and "number of samples" or perhaps "95th percentile" available. The developer claims there's a new back-end "in beta" that allows you to write your own function, but this appears to still only do 1:1 retention (when saving older data, you really want the statistics calculated into many streams from a single input). Also, "in beta" seems a little risky for this installation. If I'm wrong about this assumption, I'd be happy to be shown my error!

    I've heard Zabbix recommended, but it puts data into MySQL or some other SQL database. 100,000 counters on a 5-second interval means 20,000 tps, and while I have an SSD, I don't have an 8-way RAID-6 with battery-backed cache, which I think I'd need for that to work out :-) Again, if that's actually something that's not a problem, I'd be happy to be shown the error of my ways. Also, can Zabbix do the single data stream - promote with statistics thing?

    Finally, Munin claims to have a new 2.0 coming out "in beta" right now, and it boasts custom retention plans. However, again, it's that "in beta" part -- has anyone used that for real, and at scale? How did it perform, if so?

    I'm almost thinking about using a graphing front-end (such as Graphite) and rolling my own retention backend with a simple layer on top of mmap() and some stats. That wouldn't be particularly hard, and would probably perform very well, letting the kernel figure out the balance between frequency of flushing to disk and process operations. Any other suggestions I should look into? Note: it has to have shown itself able to sustain the kinds of data loads I'm suggesting above; if you can point at the specific implementation you're referencing, so much the better!
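
    For concreteness, a minimal Python sketch of the "promote with statistics" retention idea described above: collapsing a run of high-rate samples into one coarser bucket that keeps min, max, average, standard deviation, sum and count rather than only the average. The bucket size and sample values are illustrative only.

        import statistics

        def promote(samples, bucket_size):
            """Collapse raw samples into coarser buckets, keeping several
            statistics per bucket instead of only the average."""
            buckets = []
            for i in range(0, len(samples), bucket_size):
                chunk = samples[i:i + bucket_size]
                buckets.append({
                    "min": min(chunk),
                    "max": max(chunk),
                    "avg": sum(chunk) / len(chunk),
                    "stdev": statistics.pstdev(chunk),
                    "sum": sum(chunk),
                    "count": len(chunk),
                })
            return buckets

        # e.g. promote 5-second samples into 1-minute buckets (12 samples per bucket)
        five_second_samples = [1.0, 2.0, 4.0] * 4
        print(promote(five_second_samples, 12))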

    Read the article

  • VMware vSphere cluster design for site redundancy

    - by Stefan Radovanovici
    I have a question about the best design for site redundancy when using vSphere clusters. A bit of background info about our situation first, though. We are a medium-sized company with two main offices, located in different countries. Our networks are linked by a Layer 2 150Mbps leased line which is currently underused. We have a variety of services running for internal use within the company, some on physical servers and some on existing vSphere clusters. In our department we also run several services (almost all running under various forms of Linux) like NTP, syslog, jump servers, monitoring servers and so on. We now have the requirement that those servers need to be redundant within each location (which they are not at the moment) and also site redundant (which they are to some extent; the servers are duplicated in the 2nd location with configurations kept in sync via various methods at the application layer).

    There is no SAN available for us, at least not something that we can use at the moment. Cost is also an issue. While we do have some budget available for this, we can't afford to buy SANs for both locations, for example. I looked at the VSA feature and it seems that this could be something for us, but I am unsure how to solve the site-redundancy requirement. At the moment, for testing purposes, I am setting up in a lab a vSphere 5 with VSA on two ESXi hosts. I am currently using the Essentials Plus kit with VSA license, which allows me to build a VSA cluster on up to 3 hosts, together with a vCenter license to manage them. The hosts each have two dual-port network cards and two 600GB drives, running in RAID 1. Hardware-wise this will be enough for us to run all the services we need as VMs and will provide redundancy within the site.

    At the moment I see only two options to have site redundancy:
      - build an identical VSA cluster in the second location and keep the various services synced at the application layer (database sync, rsync and so on).
      - simply move one of the hosts from the existing cluster to the second location, basically having the VSA cluster span the 150Mbps link between the sites.

    I would very much prefer the second option but I am unsure how well it'll work, if it can work at all. Technically it should; we can span the needed VLANs across the leased line and have them available in the second location. The advantage would be that we don't need to worry at all about syncing databases and the like. But I have the feeling that the bandwidth will not be enough, and I have no way of knowing how much traffic the VSA cluster will generate between the hosts. I realize that this will most likely depend on the individual usage of the VMs but still, I have no idea how VSA replicates data between the ESXi hosts.

    Are these my only options, or can my goals be achieved in some other way? Is there perhaps a way to have some sort of "cold standby" cluster in the second location where the VMs would be synced once per night from the main location? The idea is that in case the first site becomes unavailable, we would be able to bring all those VMs online there. We would be OK with the data being 1 day old. Any answers are appreciated. Best regards, Stefan

    Read the article

  • Diff 2 files while ignoring parts of lines

    - by Millianz
    I would like to diff a file system. Currently my bash script prints out the file system recursively into a file (ls -l -R) and diffs it with an expected output. An example line in this file would be:

        drw---- 100000f3 00000400 0 ./foo/

    My current diff command is:

        diff "$TEMP_LOG" "$DIFF_FILE_OUT" --strip-trailing-cr --changed-group-format='%' --unchanged-group-format='' "$SubLog"

    As you can see, I ignore additional lines in the current output file; I only care about lines that match the master output. I now have the problem, though, that some files may differ in size, or a folder might even have a different name, but due to its location I know what access rights it should have. For example:

        Output: ------- 00000000 00000000 528 ./foo/bar.txt
        Master: ------- 00000000 00000000 200 ./foo/bar.txt

    Only the size differs here, and it doesn't matter. I would like to just ignore certain parts of the diff, kind of like an ANSI C comment:

        Master: ------- 00000000 00000000 /*200*/ ./foo/bar.txt

    -- OR --

        Master: d------ 00000000 00000000 /*10*/ ./foo//*123123*///*76456546*//bar.txt
        Output: d------ 00000000 00000000 0 ./foo/asd/sdf/bar.txt

    ...and still have it diff correctly. Is this even possible with diff, or will I have to write a custom script for it? Since I'm fairly new to cygwin I might be using the completely wrong tool altogether; I'm happy for any suggestions.

    Update: Taking a step back, here is the general task at hand that I want to achieve. I want to write a script that checks the file system to see if the read/write permissions are set up correctly. The structure of the file system is under my control, so I don't have to worry about it changing too much. Sometimes folders/files might not be present, but if they are, their permissions must be checked. For example, assume that the following is a snapshot of the current file system structure:

        drw ./foo
        drw ./foo/bar
        -rw ./foow/bar/bar.txt
        drw ./foo/baz
        -rw ./foo/baz/baz.txt

    And this is what the file system structure might dictate, i.e. if these folders/files are present, the permissions must match:

        drw ./foo
        drw ./foo/bar
        -rw ./foo/bar/bar.txt
        --- ./foo/bar/foobar.txt
        drw ./foo/baz
        -rw ./foo/baz/foobaz.txt

    In this case the file system checked out OK, since all files present match their expected values. The situation becomes more complicated as soon as certain folders might have any arbitrary name; only due to their location do I know what their permissions should be. Assume that the directory ./foo/bar in the above example might be such a case, i.e. instead of bar the folder could have any name, but still match the -rw permissions. This seems like a very complicated situation, and I'm not even sure if I can solve it with bash scripting alone. I might have to write an actual application.
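
    Since a custom script is mentioned as a fallback, here is a minimal Python sketch of that route: normalize each listing line by dropping the size column before comparing, so only permissions and paths are diffed. The field layout and the file names master.txt/output.txt are assumptions based on the sample lines above, not part of the question.

        def normalize(line):
            """Drop the size column so 'perms field field SIZE path' lines
            compare on permissions and path only (layout assumed from the
            sample lines in the question)."""
            fields = line.split()
            if len(fields) >= 5:
                perms, f1, f2, _size, path = fields[0], fields[1], fields[2], fields[3], " ".join(fields[4:])
                return f"{perms} {f1} {f2} {path}"
            return line.strip()

        def load(path):
            with open(path) as fh:
                return {normalize(line) for line in fh if line.strip()}

        # entries expected by the master list but missing (or with different
        # permissions) in the current output
        mismatches = load("master.txt") - load("output.txt")
        for entry in sorted(mismatches):
            print("mismatch:", entry)

    Handling folders whose names are arbitrary would then mean normalizing those path components as well (e.g. replacing them with a wildcard token) before the set comparison.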

    Read the article

  • How to encrypt dual boot Windows 7 and XP (BitLocker, Truecrypt combo?) on SSD (recommended?)

    - by therobyouknow
    I would like to set up a dual-boot Windows 7 and Windows XP laptop/notebook computer where each operating system's partition is fully encrypted. I would like to do this on an SSD - a 128GB Crucial M4.

    My research

    Dual boot of Truecrypt-encrypted OSs on one drive (not possible in Truecrypt 7.x at time of writing): This cannot be done on a standard Truecrypt setup - it will only support encrypting one of the operating systems. I have tried this and also read about it here on superuser.com. However, I did see a solution here that uses grub4dos as the initial bootloader to chain to separate Truecrypt-encrypted OSs, in my case Windows 7 and Windows XP: http://yyzyyz.blogspot.co.uk/2010/06/truecrypt-how-to-encrypt-multiple.html. I am not going to consider this solution as it relies upon some custom code for use in the bootloader that is provided by the author. I would prefer a solution that can be fully understood so that I can be sure that there is nothing undesirable occurring (i.e. malware or just simply bugs in the code). I would like to believe such a solution doesn't have those risks, but I can't be sure.

    BitLocker and Truecrypt combination - possible solution?

    So I am now considering a combination of encryption programs: I now aim to encrypt Windows XP with Truecrypt and Windows 7 with BitLocker. Assuming the Truecrypt bootloader can boot into non-Truecrypt OSs (e.g. via hitting Escape to go to another menu), then this solution may be viable.

    SSDs and encryption (use the fastest possible spinning hard disk instead?)

    I read on various superuser.com posts and elsewhere that current SSDs are not suited to whole-drive encryption for various reasons:
      - impact on performance algorithms that give SSDs an advantage over spinning hard disks (algorithms used in compression of data, for example)
      - wear on the SSD, shortening its life
      - security issues whereby data is repeated, as indicated in some Truecrypt documentation

    So I am now considering not using an SSD. But with the aim of having the fastest drive possible, I am considering the Western Digital Scorpio Black 2.5" 7200rpm hard disk, as this appears to be top rated among spinning platter-based hard drives (I don't work for Western Digital).

    Summary

    So, to achieve whole-drive-encrypted dual boot of Windows 7 and Windows XP with minimal performance impact, I intend to use a combination of Truecrypt and BitLocker on a top-rated conventional spinning platter-based hard disk.

    Questions

    Will my summary:
      - achieve whole-disk encryption of the dual-boot Windows XP and Windows 7? Or can you suggest a simpler solution, including one that requires only Truecrypt (BitLocker is not available on XP), or another encryption tool, including paid-for ones?
      - provide the highest performance?

    Am I correct to avoid using an SSD with encryption for the reasons I discovered? Are the concerns about SSDs and encryption still very real (some articles I read go back to 2010)? Thanks for your input!

    Read the article

  • Google analytics and multiple independent subdomains

    - by MTilsted
    I need some help trying to set up Google Analytics correctly. Here is my setup: we host sites for multiple customers, and each customer has their own subdomain on our site. So we have customerA.oursite.com and customerB.oursite.com, and as we add more customers we get more subdomains. We do want to track all data for each customer independently, but I don't want to create a new Google tracking code for each new customer. So my plan is to track all visits with "oursite.com", and then I will create a filter in Google Analytics to get data for each specific customer (all visits for a specific subdomain).

    Is this (one tracking code, and a subdomain filter) the right way to do it?

    To create a subdomain filter I add a new profile for each customer, and then add a custom filter saying include "Request URI" and fill in "CustomerDomain.oursite.com". Is this the correct way to do it?

    And a general question about filters: is it really impossible to create a new filter by applying it to data in an existing profile? I would really like to just collect all the data in one "main" profile and then create subdomain filters as we need them. But it seems that Google only applies filters to new incoming data, not existing data. Is this really true?

    The following is my tracking code. Is '_setDomainName', 'none' the right thing to do?

        <script type="text/javascript">
        /* Tracking code for qrtown.com */
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-11584298-10']);
        _gaq.push(['_setDomainName', 'none']);
        _gaq.push(['_trackPageview']);

        (function() {
          var ga = document.createElement('script');
          ga.type = 'text/javascript';
          ga.async = true;
          ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0];
          s.parentNode.insertBefore(ga, s);
        })();
        </script>

    Read the article

  • Apache returns 403 Forbidden for alternative port vhost

    - by Wesley
    I'm having an issue getting vhosts to work on Apache 2.2, Debian 6. I have two VirtualHosts, one on port 80 and one on port 8888. The port 80 one has been created automatically by DirectAdmin; the 8888 one is a custom one. Its configuration is as follows:

        <VirtualHost *:8888 >
            DocumentRoot /home/user/public_html/development
            ServerName www.myserver.nl
            ServerAlias myserver.nl
            <Directory "/home/user/public_html/development">
                Options +Indexes +FollowSymLinks +MultiViews
                AllowOverride All
                Order Allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Of course I also have a NameVirtualHost *:8888. The port 80 DocumentRoot is /home/user/public_html/production, which is perfectly accessible and works like a charm. The port 8888 docroot of /home/user/public_html/development is 403 Forbidden, though. I have compared the permissions for both folders. They seem fine to me:

        drwxr-xr-x 2 root root 4096 Aug 17 16:14 development
        drwxr-xr-x 4 root root 4096 Aug 18 04:29 production

    Also, the index.php file which is supposed to display when accessing through port 8888, located in /development/:

        -rwxr-xr-x 1 root root 41 Aug 17 16:14 index.html

    I have looked at my error_log and found many of the following entries, only being added to the log file when accessing through port 8888:

        [Sat Aug 18 04:35:09 2012] [error] [client 27.32.156.232] Symbolic link not allowed or link target not accessible: /home/user/public_html

    /home/user/public_html is a symbolic link that refers to /home/user/domains/mydomain/public_html. The symbolic link has the following permissions:

        lrwxrwxrwx 1 admin admin 29 Aug 17 15:56 public_html -> ./domains/mydomain/public_html

    I'm at a loss. It seems that everything is readable or executable. I've set the Directory to FollowSymLinks in the httpd.conf file, but that doesn't seem to make a difference. If I change that directory tag to <Directory "/home/admin/public_html"> (so it has FollowSymLinks on that as well) it still does not work. Any help is greatly appreciated. If I need to post more information, let me know. I'm pretty much a beginner at this stuff.

    UPDATE: I ended up changing the configuration to go directly to the actual path of the files, avoiding the public_html symlink altogether. That worked. Thanks for the suggestions, folks.

        DocumentRoot /home/user/domains/mydomain/public_html/development

    instead of

        DocumentRoot /home/user/public_html/development

    Read the article

  • Recommendations for an efficient offsite remote backup solution for VMs

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to up to 20). Currently I am running a two-node Proxmox cluster (which is a Debian base using KVM for virtualization with a custom web front end to administer). I have two nearly identical boxes with AMD Phenom II X4s and Asus motherboards. Each has 4 500GB SATA2 HDDs: 1 for the OS and other data for the Proxmox install, and 3 using mdadm+drbd+lvm to share the 1.5TB of storage between the two machines. I mount LVM images to KVM for all of the virtual machines. I currently have the ability to do live transfer from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM running Win2008 with MS SQL Server).

    I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store those on an external hard drive on the network. I then have the jungledisk service (using Rackspace) sync the vzdump folder for remote offsite backup.

    This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With jungledisk's block-level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least half an hour. The much better solution would of course be something that allows me to instantly take the difference of two time points (say what was written from 6am to 7am), zip it, then send that difference file to the backup server, which would instantly transfer it to the remote storage on Rackspace.

    I have looked a little into ZFS and its ability to do send/receive. That, coupled with piping the data through bzip or something, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols?) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So, ZFS, zumastor or other?
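
    For reference, a minimal sketch of the ZFS send/receive idea described above: an incremental stream of what changed between two snapshots, compressed and piped to a backup host. It is written as a thin Python wrapper around the zfs and ssh command lines; the pool, dataset, snapshot and host names are made up for illustration, not taken from the question.

        import subprocess

        # Hypothetical names: adjust the dataset, snapshot labels and backup host.
        dataset = "tank/vmstore"
        old_snap = f"{dataset}@0600"
        new_snap = f"{dataset}@0700"
        backup_host = "backup.example.com"

        # Incremental stream of everything written between the two snapshots,
        # compressed with bzip2 and received into a dataset on the backup host.
        cmd = (
            f"zfs send -i {old_snap} {new_snap} "
            f"| bzip2 "
            f"| ssh {backup_host} 'bunzip2 | zfs receive -F backup/vmstore'"
        )
        subprocess.run(cmd, shell=True, check=True)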

    Read the article

  • VPN Connection Causes Internal LAN Connection Loss with Server

    - by sleepisfortheweak
    I've tried configuring basic PPTP VPN at my small business using a number of different tutorials. As far as I can tell, the actual VPN connection works fine, but upon connecting a client, the server 'disappears' from the internal LAN. The RRAS service must be stopped before the connection is restored.

    My setup: The network is simply a DSL gateway/router to the outside, functioning as NAT/firewall/DHCP. The server is a Windows Server 2008 machine at fixed IP 192.168.1.200. The server has 1 NIC, so I used the 'custom' option when configuring RRAS. The RRAS settings should be default except that I've disabled ports for connection types I'm not using and reduced PPTP ports to 10. I've also created an address pool and disabled DHCP packet forwarding. The server only functions as a file share and now a VPN server. Local LAN computers all have mapped network shares to the server, authenticated based on the local user/group setup on the server.

    The problem: The moment a client connects through VPN, the server 'disappears' from the local network. All mapped drives disconnect and there is no response to a ping 192.168.1.200. Even if the client disconnects, the server does not re-appear at that address until the RRAS service is stopped.

    I've tried:
      - using an address pool inside and outside the local subnet
      - using DHCP relay
      - checking inbound/outbound filters (none enabled)

    The fact that nothing I've tried has had any effect, and that I can connect and successfully obtain an IP, tells me that it's something more fundamental I'm missing. My gut tells me that it's something to do with the second IP address added by the VPN client somehow taking over the interface, or traffic from the local LAN accidentally getting routed to the VPN client instead of handled at the server once RRAS has become 'active' when a client connects. Hopefully this may be obvious to someone with real IT experience. I've been doing this a while and have almost never been stumped. I'm starting to think it might actually be something tricky, since my setup is pretty basic yet refuses to work. I'll be happy to include more info if this doesn't ring any bells right away for anyone. Thanks

    Read the article

  • How should we serve files in a small bioinformatics cluster?

    - by cespinoza
    We have a small cluster of six Ubuntu servers. We run bioinformatics analyses on these clusters. Each analysis takes about 24 hours to complete, each Core i7 server can handle 2 at a time, and each takes as input about 5GB of data and outputs about 10-25GB of data. We run dozens of these a week. The software is a hodgepodge of custom Perl scripts and 3rd-party sequence alignment software written in C/C++.

    Currently, files are served from two of the compute nodes (yes, we're using compute nodes as file servers). Each node has 5 1TB SATA drives mounted separately (no RAID) and pooled via glusterfs 2.0.1. They each have 3 bonded Intel PCI gigabit ethernet cards, attached to a D-Link DGS-1224T switch ($300 24-port consumer-level). We are not currently using jumbo frames (not sure why, actually). The two file-serving compute nodes are then mirrored via glusterfs. Each of the four other nodes mounts the files via glusterfs. The files are all large (4GB+), and are stored as bare files (no database/etc.), if that matters.

    As you can imagine, this is a bit of a mess that grew organically without forethought, and we want to improve it now that we're running out of space. Our analyses are I/O intensive and it is a bottleneck -- we're only getting 140MB/sec between the two fileservers, maybe 50MB/sec from the clients (which only have single NICs). We have a flexible budget which I can probably get up to $5k or so.

    How should we spend our budget? We need at least 10TB of storage fast enough to serve all nodes. How fast/big does the CPU/memory of such a file server have to be? Should we use NFS, ATA over Ethernet, iSCSI, Glusterfs, or something else? Should we buy two or more servers and create some sort of storage cluster, or is 1 server enough for such a small number of nodes? Should we invest in faster NICs (say, PCI-Express cards with multiple connectors)? The switch? Should we use RAID, and if so, hardware or software? And which RAID (5, 6, 10, etc.)? Any ideas appreciated. We're biologists, not IT gurus.

    Read the article

  • How to flip video feed that's presented upside down?

    - by Zuul
    Skype and other applications running under Windows 7 Ultimate are presenting the video captured from the laptop's built-in webcam upside down. I've tried many solutions that I was able to find regarding issues like this, but to no avail. Some of the most relevant are discussed here:
      - From the Skype Support Network, the thread "why is my video image of myself upside-down???"
      - From ASUSTek Forums, the thread "Built-in camera upside down"

    Both present several potential solutions to this issue, but I've been unable to fix it for the laptop, an ASUS U6S. What I've already tried:

    Changing drivers: The driver that works must be the one from Windows; all others available from ASUS either don't install or install but the webcam doesn't provide any video feed. This rules out all options that involve using an older driver or editing the .inf file to manually adjust the settings. ASUS does not provide drivers for Windows 7, so I've used drivers for Windows Vista 32-bit.

    Using the application manycam: This application actually solves the issue (temporarily), but creates new ones. If I use the application to flip the video feed, Skype video calls cease to work. The application doesn't save the settings; at least I wasn't able to find any way to save the settings I've used to flip the video feed. A computer restart brings everything back to how it was: video feed upside down and, if the application is still installed, Skype continuing to fail on video calls.

    Regedit: I've searched through the Windows Registry Editor to find any reference to the webcam settings, hoping to find a key with a flip parameter, since it's up to the driver to flip the image (from what I could ascertain about this problem). I couldn't find any reference to such settings; either they actually don't exist within the Windows Registry or they use some weird name that I couldn't think of.

    System configuration: I was able to access the webcam system settings from the Windows Device Manager, but the tab that actually has the Image Rotation setting is always disabled. The same goes for the settings available from the Skype webcam options (which essentially present the same settings as the Windows Device Manager, just within a custom Skype pop-up).

    Question: How can I flip the video feed from the laptop's built-in webcam, so as to properly see and broadcast the video?

    Read the article

  • How does an NTP host switch among the various modes?

    - by James A. Rosen
    The NTPv3 RFC describes five operating modes:

      - Symmetric Active (1): A host operating in this mode sends periodic messages regardless of the reachability state or stratum of its peer. By operating in this mode the host announces its willingness to synchronize and be synchronized by the peer.
      - Symmetric Passive (2): This type of association is ordinarily created upon arrival of a message from a peer operating in the symmetric active mode and persists only as long as the peer is reachable and operating at a stratum level less than or equal to the host; otherwise, the association is dissolved. However, the association will always persist until at least one message has been sent in reply. By operating in this mode the host announces its willingness to synchronize and be synchronized by the peer.
      - Client (3): A host operating in this mode sends periodic messages regardless of the reachability state or stratum of its peer. By operating in this mode the host, usually a LAN workstation, announces its willingness to be synchronized by, but not to synchronize, the peer.
      - Server (4): This type of association is ordinarily created upon arrival of a client request message and exists only in order to reply to that request, after which the association is dissolved. By operating in this mode the host, usually a LAN time server, announces its willingness to synchronize, but not to be synchronized by, the peer.
      - Broadcast (5): A host operating in this mode sends periodic messages regardless of the reachability state or stratum of the peers. By operating in this mode the host, usually a LAN time server operating on a high-speed broadcast medium, announces its willingness to synchronize all of the peers, but not to be synchronized by any of them.

    It seems to me, though, that any host except a leaf node would probably be in several modes. For example, I might have a local area network with three NTP servers, each in symmetric active (1) mode with respect to one another. They would also each be clients (3) of one of the many public stratum-two time servers. Lastly, they would all serve as servers (4) to the many local clients. Is the point that they're only in a given mode for a moment during the synchronization? If so, how does a host know to switch? I'm only looking for enough depth here to discuss the issue in an educated manner, not to write a custom time server.
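
    One detail worth noting: the mode is carried per packet, in the low 3 bits of the first NTP header byte, which is how one host can act in different modes for different associations at the same time. A minimal Python sketch that sends a client (mode 3) request and reads the mode out of the reply; the pool.ntp.org server name is just an example, not part of the question.

        import socket

        # First header byte: LI (2 bits) | Version (3 bits) | Mode (3 bits).
        # 0x1B = leap 0, version 3, mode 3 (client request).
        request = bytes([0x1B]) + bytes(47)

        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(5)
            sock.sendto(request, ("pool.ntp.org", 123))   # example server only
            response, _ = sock.recvfrom(48)

        first_byte = response[0]
        print("version:", (first_byte >> 3) & 0x07)
        print("mode:   ", first_byte & 0x07)   # expect 4 (server) in the reply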

    Read the article

  • Why would a process monitoring script use exit 1; on finding no problems?

    - by user568458
    General question: On a Linux (CentOS) server, if a process monitoring script run by cron is set to close with exit 1; rather than exit 0; on finding that everything is okay and that no action is needed, is that a mistake? Or are there legitimate reasons for calling exit 1; instead of exit 0; on the "everything's fine, no action needed" condition? exit 0; on finding no problems seems to me to be more appropriate. But maybe there's something I'm not aware of. For example, maybe there's something specific to cron? Or maybe there's a convention in process monitoring scripts that 'failure' means 'this script failed to need to fix a problem' (rather than what I would expect, which is that exit 1; would mean 'the process being monitored has failed')?

    My specific case: I'm looking at a process monitoring script written by my web hosting company. By process monitoring script, I mean a script executed by cron on a regular basis that checks if an important system process is running and, if it isn't running, takes actions such as mailing an administrator or restarting the process. Here's the (generalised) structure of their script, for a service running on port 8080 (in this case, Apache Tomcat):

        SERVICE=$(/usr/sbin/lsof -i tcp:8080 | wc -l);
        if [ $SERVICE != 0 ]; then
            exit 1;
        else
            #take action
        fi

    Seems simple enough even for someone with limited knowledge like me, except the exit 1; part seems odd. As I understand it, exit 0; closes a program and signifies to the parent that executed the program that everything is fine, while exit n; where n is non-zero and less than 127 signifies that there has been some kind of error or problem. Here, their script seems to go against that rule - it calls exit 1; in the condition where everything is fine, and doesn't exit after taking remedial action in the problem condition. To me, this looks like a mistake - but my experience in this area is limited. Are there cases where calling exit 1; in the "everything's fine, no action needed" condition is more appropriate than calling exit 0;? Or is it a mistake?

    Wider context is pretty simple. It's a CentOS VPS, running Plesk. The script is being called by cron via Plesk's "Scheduled tasks" cron manager. There's no custom layer between cron and this script that would respond in an unusual way to the exit call. It's a fairly average, almost out-of-the-box Plesk-managed CentOS VPS (in so far as there is such a thing). The process being monitored by this script is Apache Tomcat.
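
    For comparison only, a minimal sketch of the conventional mapping described in the question -- exit status 0 when the monitored service is reachable, non-zero after remedial action was needed. It is written in Python rather than the script's bash, the port is the one from the question, and it is an illustration of the convention, not the hosting company's script.

        import socket
        import sys

        def port_open(host, port, timeout=3):
            """Return True if something is accepting connections on host:port."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        if port_open("127.0.0.1", 8080):
            sys.exit(0)   # everything's fine, no action needed
        else:
            # take action here (mail an administrator, restart the service, ...)
            sys.exit(1)   # signal to the parent that something was wrong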

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it".

    Right now, I have the hits-tracking portion completed and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with gunicorn for the gateway. My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1,200 static files per second (benchmarked using Apache ab against a single static asset). To compare, if I swap out the static file link with my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets.

    However, when I benchmark with millions of hits, I notice a few things:
      - No disk usage -- this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis.
      - Non-constant memory usage -- presumably due to Redis' memory management, my memory usage will gradually climb up and then drop back down, but it's never once been my bottleneck.
      - System load hovers around 2-4, the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second.
      - If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250.

    My questions:
      a. Does it look like I'm maxing out this server yet? Is 1,200/s static-file nginx performance comparable to what others have experienced?
      b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much.
      c. Are there any Linux-level settings that could be limiting my incoming connections?
      d. What could cause my performance to degrade to 250 r/s on long-running tests? Again, the memory is not maxing out during these tests, and HDD use is nil.

    Thanks in advance, all :)
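
    As an illustration of the tracking path described above (a Django view that does nothing but push the request details into Redis for later batch processing), here is a minimal sketch; the view name, list key and Redis location are assumptions rather than the actual code, and it relies on the redis-py package.

        # views.py -- illustrative only; assumes the redis-py package (pip install redis)
        import json
        import time

        import redis
        from django.http import HttpResponse

        r = redis.Redis(host="localhost", port=6379, db=0)

        def tracking_pixel(request):
            # Push the raw hit onto a list; a separate batch job drains it later.
            hit = {
                "ts": time.time(),
                "path": request.get_full_path(),
                "ua": request.META.get("HTTP_USER_AGENT", ""),
                "ip": request.META.get("REMOTE_ADDR", ""),
            }
            r.rpush("hits", json.dumps(hit))
            return HttpResponse(status=204)   # empty response keeps the request cheap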

    Read the article

  • Remote Debian System Preventing Logon

    - by choobablue
    I have a dozen or so single-board computers on a network running Debian (squeeze), and I access them via ssh (the ssh server is dropbear). To give an idea of the hardware of these computers: they're 1.2 GHz x86 processors, 1GB of RAM and 4GB flash drives formatted as ext2 (I avoided ext3 to prevent the added flash write stress from journaling); there is also a swap partition on the drive.

    Normally the setup I'm using works great and I can access all the computers. Every once in a while one will prevent access. What happens is I try to connect via ssh (putty) and it gives me the login prompt, I enter the username and password and it responds 'Access Denied', and it will also refuse any public key in ~/.ssh/authorized_keys. The credentials are correct, as they worked previously. The computer responds to pings and putty recognizes the server public key, which implies to me the system is still running. Restarting the server fixes the problem and I can log in again. (I tried a temporary fix of putting shutdown -r now in the root crontab, but this doesn't seem to reliably be run once the hang happens.) Once I restart, however, there doesn't seem to be any information in any of the system logs to indicate what happened; the logs are simply empty for that time period, as if the system had crashed.

    There is some custom software running on the system which appears to stop working (which is why I wanted to ssh to begin with). I'm assuming that this program is the source of the problems, but I'm unsure how it would cause them and how to debug what is happening. The most likely explanation I can think of is that there is a memory leak in the other program that then prevents dropbear from spawning a new login shell (and crontab from executing shutdown), as there is not enough free memory. But looking at memory usage on the other (working) computers, there doesn't seem to be any meaningful increase in memory to indicate a leak (unless it's a very big, fast-acting and rare leak). I would think that when the OS ran out of memory it would restart the system or kill processes (the Linux kernel restarts, right?).

    The other thing I wonder about is whether the fact that they are running off a flash drive could have some effect, especially the swap partition (which I think I should remove to prevent wear of the flash), but the flash drives are young (~1 month) and I don't think that wear would be a factor yet.

    Does anybody have an idea of what could cause these symptoms, whether it could be caused by a memory leak, or something else I haven't thought of? And does anybody know of a method to try to debug the problem and find out more information about what's going wrong?
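
    One cheap way to test the memory-leak theory above is to have cron append a small memory snapshot to a file on each box, so there is something to look at after the next hang. A minimal Python sketch reading /proc/meminfo; the log path and the fields recorded are arbitrary illustrative choices.

        # meminfo_log.py -- run from cron, e.g. "*/5 * * * * python3 /root/meminfo_log.py"
        import time

        FIELDS = ("MemFree", "Buffers", "Cached", "SwapFree")

        def read_meminfo():
            values = {}
            with open("/proc/meminfo") as fh:
                for line in fh:
                    key, rest = line.split(":", 1)
                    if key in FIELDS:
                        values[key] = rest.strip()
            return values

        with open("/var/log/meminfo.log", "a") as log:
            stamp = time.strftime("%Y-%m-%d %H:%M:%S")
            info = " ".join(f"{k}={v}" for k, v in read_meminfo().items())
            log.write(f"{stamp} {info}\n")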

    Read the article

  • Eclipse won't open Android XML files

    - by mike
    I'm just starting with Android and everything seems to be working fine, but when I try to look at any XML file in eclipse, I get the following error. The only way I can see them is by "Opening With" - TextFile. org.eclipse.core.runtime.CoreException: Error opening the Android XML editor. Is the document an XML file? at com.android.ide.eclipse.adt.internal.editors.AndroidEditor.createTextEditor(Unknown Source) at com.android.ide.eclipse.adt.internal.editors.AndroidEditor.createAndroidPages(Unknown Source) at com.android.ide.eclipse.adt.internal.editors.AndroidEditor.addPages(Unknown Source) at org.eclipse.ui.forms.editor.FormEditor.createPages(FormEditor.java:138) at org.eclipse.ui.part.MultiPageEditorPart.createPartControl(MultiPageEditorPart.java:357) at org.eclipse.ui.internal.EditorReference.createPartHelper(EditorReference.java:662) at org.eclipse.ui.internal.EditorReference.createPart(EditorReference.java:462) at org.eclipse.ui.internal.WorkbenchPartReference.getPart(WorkbenchPartReference.java:595) at org.eclipse.ui.internal.EditorReference.getEditor(EditorReference.java:286) at org.eclipse.ui.internal.WorkbenchPage.busyOpenEditorBatched(WorkbenchPage.java:2857) at org.eclipse.ui.internal.WorkbenchPage.busyOpenEditor(WorkbenchPage.java:2762) at org.eclipse.ui.internal.WorkbenchPage.access$11(WorkbenchPage.java:2754) at org.eclipse.ui.internal.WorkbenchPage$10.run(WorkbenchPage.java:2705) at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70) at org.eclipse.ui.internal.WorkbenchPage.openEditor(WorkbenchPage.java:2701) at org.eclipse.ui.internal.WorkbenchPage.openEditor(WorkbenchPage.java:2685) at org.eclipse.ui.internal.WorkbenchPage.openEditor(WorkbenchPage.java:2676) at org.eclipse.ui.ide.IDE.openEditor(IDE.java:651) at org.eclipse.ui.ide.IDE.openEditor(IDE.java:610) at org.eclipse.jdt.internal.ui.javaeditor.EditorUtility.openInEditor(EditorUtility.java:361) at org.eclipse.jdt.internal.ui.javaeditor.EditorUtility.openInEditor(EditorUtility.java:168) at org.eclipse.jdt.ui.actions.OpenAction.run(OpenAction.java:229) at org.eclipse.jdt.ui.actions.OpenAction.run(OpenAction.java:208) at org.eclipse.jdt.ui.actions.SelectionDispatchAction.dispatchRun(SelectionDispatchAction.java:274) at org.eclipse.jdt.ui.actions.SelectionDispatchAction.run(SelectionDispatchAction.java:250) at org.eclipse.jdt.internal.ui.packageview.PackageExplorerActionGroup.handleOpen(PackageExplorerActionGroup.java:373) at org.eclipse.jdt.internal.ui.packageview.PackageExplorerPart$4.open(PackageExplorerPart.java:526) at org.eclipse.ui.OpenAndLinkWithEditorHelper$InternalListener.open(OpenAndLinkWithEditorHelper.java:48) at org.eclipse.jface.viewers.StructuredViewer$2.run(StructuredViewer.java:842) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.core.runtime.Platform.run(Platform.java:888) at org.eclipse.ui.internal.JFaceUtil$1.run(JFaceUtil.java:48) at org.eclipse.jface.util.SafeRunnable.run(SafeRunnable.java:175) at org.eclipse.jface.viewers.StructuredViewer.fireOpen(StructuredViewer.java:840) at org.eclipse.jface.viewers.StructuredViewer.handleOpen(StructuredViewer.java:1101) at org.eclipse.jface.viewers.StructuredViewer$6.handleOpen(StructuredViewer.java:1205) at org.eclipse.jface.util.OpenStrategy.fireOpenEvent(OpenStrategy.java:264) at org.eclipse.jface.util.OpenStrategy.access$2(OpenStrategy.java:258) at org.eclipse.jface.util.OpenStrategy$1.handleEvent(OpenStrategy.java:298) at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84) at 
org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1003) at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:3880) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3473) at org.eclipse.ui.internal.Workbench.runEventLoop(Workbench.java:2405) at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2369) at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2221) at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:500) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:493) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:113) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:194) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:368) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

    Read the article

  • How to get attribute value using SelectSingleNode?

    - by Nano HE
    I am parsing an XML document. I need to find the gid (an attribute) value (3810), based on SelectSingleNode(). I found it is not easy to find the attribute name and its value. Can I use this method, or must I switch to another way? My code is attached. How can I use the book object to get the attribute value 3810 for gid? Thank you. My test.xml file is as below:

        <?xml version="1.0" ?>
        <root>
          <VersionInfo date="2007-11-28" version="1.0.0.2" />
          <Attributes>
            <AttrDir name="EFEM" DirID="1">
              <AttrDir name="Aligner" DirID="2">
                <AttrDir name="SequenceID" DirID="3">
                  <AttrObj text="Slot01" gid="3810" unit="" scale="1" />
                  <AttrObjCount value="1" />
                </AttrDir>
              </AttrDir>
            </AttrDir>
          </Attributes>
        </root>

    I wrote the test.cs as below:

        public class Sample
        {
            public static void Main()
            {
                XmlDocument doc = new XmlDocument();
                doc.Load("test.xml");
                XmlNode book;
                XmlNode root = doc.DocumentElement;
                book = root.SelectSingleNode("Attributes[AttrDir[@name='EFEM']/AttrDir[@name='Aligner']/AttrDir[@name='SequenceID']/AttrObj[@text='Slot01']]");
                Console.WriteLine("Display the modified XML document....");
                doc.Save(Console.Out);
            }
        }

    [Update 06/10/2010] The XML file is a complex file. It includes thousands of gids, but for each XPath the gid is unique. I load the XML file into a TreeView control:

        this.treeView1.AfterSelect += new System.Windows.Forms.TreeViewEventHandler(this.treeView1_AfterSelect);

    When the treeView1_AfterSelect event occurs, e.Node.FullPath returns a string value. I parse the string value e.Node.FullPath, and from that I get the members of the XPath above. Then I try to find which gid item was selected. I need to get the gid value as a return value indeed.
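
    The underlying pattern -- select the AttrObj node by its predicates, then read its attribute -- is the same in any XML API. A minimal sketch, shown here with Python's xml.etree purely for illustration rather than the question's C# XmlDocument:

        # Illustration only: "select the node, then read its attribute".
        import xml.etree.ElementTree as ET

        root = ET.parse("test.xml").getroot()
        node = root.find(".//AttrDir[@name='SequenceID']/AttrObj[@text='Slot01']")
        if node is not None:
            print(node.get("gid"))   # prints 3810 for the sample document above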

    Read the article

  • Troubleshooting .NET "Fatal Execution Engine Error"

    - by JYelton
    Summary: I periodically get a .NET Fatal Execution Engine Error on an application which I cannot seem to debug. The dialog that comes up only offers to close the program or send information about the error to Microsoft. I've tried looking at the more detailed information but I don't know how to make use of it.

    Error: The error is visible in Event Viewer under Applications and is as follows:

        .NET Runtime version 2.0.50727.3607 - Fatal Execution Engine Error (7A09795E) (80131506)

    The computer running it is Windows XP Professional SP3 (Intel Core 2 Quad Q6600 2.4GHz w/ 2.0 GB of RAM). Other .NET-based projects that lack multi-threaded downloading (see below) seem to run just fine.

    Application: The application is written in C#/.NET 3.5 using VS2008, and installed via a setup project. The app is multi-threaded and downloads data from multiple web servers using System.Net.HttpWebRequest and its methods. I've determined that the .NET error has something to do with either threading or HttpWebRequest, but I haven't been able to get any closer, as this particular error seems impossible to debug. I've tried handling errors on many levels, including the following in Program.cs:

        // handle UI thread exceptions
        Application.ThreadException += new ThreadExceptionEventHandler(Application_ThreadException);
        // handle non-UI thread exceptions
        AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);
        // force all windows forms errors to go through our handler
        Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);

    More notes and what I've tried:
      - Installed Visual Studio 2008 on the target machine and tried running in debug mode, but the error still occurs, with no hint as to where in the source code it occurred.
      - When running the program from its installed version (Release), the error occurs more frequently, usually within minutes of launching the application.
      - When running the program in debug mode inside VS2008, it can run for hours or days before generating the error.
      - Reinstalled .NET 3.5 and made sure all updates are applied.
      - Broke random cubicle objects in frustration.
      - Rewrote parts of the code that deal with threading and downloading in attempts to catch and log exceptions, though logging seemed to aggravate the problem (and never provided any data).

    Question: What steps can I take to troubleshoot or debug this kind of error? Memory dumps and the like seem to be the next step, but I'm not experienced at interpreting them. Perhaps there's something more I can do in the code to try and catch errors. It would be nice if the "Fatal Execution Engine Error" were more informative, but internet searches have only told me that it's a common error for a lot of .NET-related items.

    Read the article

  • Problem with Google Chrome

    - by user365559
    hi. i have javscript file for history management.IT is not supported by chrome when i am trying to navigate to back page with backbutton in the browser.I can see the url change but it doesnt go to preceeding page. BrowserHistoryUtils = { addEvent: function(elm, evType, fn, useCapture) { useCapture = useCapture || false; if (elm.addEventListener) { elm.addEventListener(evType, fn, useCapture); return true; } else if (elm.attachEvent) { var r = elm.attachEvent('on' + evType, fn); return r; } else { elm['on' + evType] = fn; } } } BrowserHistory = (function() { // type of browser var browser = { ie: false, firefox: false, safari: false, opera: false, version: -1 }; // if setDefaultURL has been called, our first clue // that the SWF is ready and listening //var swfReady = false; // the URL we'll send to the SWF once it is ready //var pendingURL = ''; // Default app state URL to use when no fragment ID present var defaultHash = ''; // Last-known app state URL var currentHref = document.location.href; // Initial URL (used only by IE) var initialHref = document.location.href; // Initial URL (used only by IE) var initialHash = document.location.hash; // History frame source URL prefix (used only by IE) var historyFrameSourcePrefix = 'history/historyFrame.html?'; // History maintenance (used only by Safari) var currentHistoryLength = -1; var historyHash = []; var initialState = createState(initialHref, initialHref + '#' + initialHash, initialHash); var backStack = []; var forwardStack = []; var currentObjectId = null; //UserAgent detection var useragent = navigator.userAgent.toLowerCase(); if (useragent.indexOf("opera") != -1) { browser.opera = true; } else if (useragent.indexOf("msie") != -1) { browser.ie = true; browser.version = parseFloat(useragent.substring(useragent.indexOf('msie') + 4)); } else if (useragent.indexOf("safari") != -1) { browser.safari = true; browser.version = parseFloat(useragent.substring(useragent.indexOf('safari') + 7)); } else if (useragent.indexOf("gecko") != -1) { browser.firefox = true; } if (browser.ie == true && browser.version == 7) { window["_ie_firstload"] = false; } // Accessor functions for obtaining specific elements of the page. function getHistoryFrame() { return document.getElementById('ie_historyFrame'); } function getAnchorElement() { return document.getElementById('firefox_anchorDiv'); } function getFormElement() { return document.getElementById('safari_formDiv'); } function getRememberElement() { return document.getElementById("safari_remember_field"); } // Get the Flash player object for performing ExternalInterface callbacks. // Updated for changes to SWFObject2. 
function getPlayer(id) { if (id && document.getElementById(id)) { var r = document.getElementById(id); if (typeof r.SetVariable != "undefined") { return r; } else { var o = r.getElementsByTagName("object"); var e = r.getElementsByTagName("embed"); if (o.length > 0 && typeof o[0].SetVariable != "undefined") { return o[0]; } else if (e.length > 0 && typeof e[0].SetVariable != "undefined") { return e[0]; } } } else { var o = document.getElementsByTagName("object"); var e = document.getElementsByTagName("embed"); if (e.length > 0 && typeof e[0].SetVariable != "undefined") { return e[0]; } else if (o.length > 0 && typeof o[0].SetVariable != "undefined") { return o[0]; } else if (o.length > 1 && typeof o[1].SetVariable != "undefined") { return o[1]; } } return undefined; } function getPlayers() { var players = []; if (players.length == 0) { var tmp = document.getElementsByTagName('object'); players = tmp; } if (players.length == 0 || players[0].object == null) { var tmp = document.getElementsByTagName('embed'); players = tmp; } return players; } function getIframeHash() { var doc = getHistoryFrame().contentWindow.document; var hash = String(doc.location.search); if (hash.length == 1 && hash.charAt(0) == "?") { hash = ""; } else if (hash.length >= 2 && hash.charAt(0) == "?") { hash = hash.substring(1); } return hash; } /* Get the current location hash excluding the '#' symbol. */ function getHash() { // It would be nice if we could use document.location.hash here, // but it's faulty sometimes. var idx = document.location.href.indexOf('#'); return (idx >= 0) ? document.location.href.substr(idx+1) : ''; } /* Get the current location hash excluding the '#' symbol. */ function setHash(hash) { // It would be nice if we could use document.location.hash here, // but it's faulty sometimes. if (hash == '') hash = '#' document.location.hash = hash; } function createState(baseUrl, newUrl, flexAppUrl) { return { 'baseUrl': baseUrl, 'newUrl': newUrl, 'flexAppUrl': flexAppUrl, 'title': null }; } /* Add a history entry to the browser. * baseUrl: the portion of the location prior to the '#' * newUrl: the entire new URL, including '#' and following fragment * flexAppUrl: the portion of the location following the '#' only */ function addHistoryEntry(baseUrl, newUrl, flexAppUrl) { //delete all the history entries forwardStack = []; if (browser.ie) { //Check to see if we are being asked to do a navigate for the first //history entry, and if so ignore, because it's coming from the creation //of the history iframe if (flexAppUrl == defaultHash && document.location.href == initialHref && window['_ie_firstload']) { currentHref = initialHref; return; } if ((!flexAppUrl || flexAppUrl == defaultHash) && window['_ie_firstload']) { newUrl = baseUrl + '#' + defaultHash; flexAppUrl = defaultHash; } else { // for IE, tell the history frame to go somewhere without a '#' // in order to get this entry into the browser history. 
getHistoryFrame().src = historyFrameSourcePrefix + flexAppUrl; } setHash(flexAppUrl); } else { //ADR if (backStack.length == 0 && initialState.flexAppUrl == flexAppUrl) { initialState = createState(baseUrl, newUrl, flexAppUrl); } else if(backStack.length > 0 && backStack[backStack.length - 1].flexAppUrl == flexAppUrl) { backStack[backStack.length - 1] = createState(baseUrl, newUrl, flexAppUrl); } if (browser.safari) { // for Safari, submit a form whose action points to the desired URL if (browser.version <= 419.3) { var file = window.location.pathname.toString(); file = file.substring(file.lastIndexOf("/")+1); getFormElement().innerHTML = '<form name="historyForm" action="'+file+'#' + flexAppUrl + '" method="GET"></form>'; //get the current elements and add them to the form var qs = window.location.search.substring(1); var qs_arr = qs.split("&"); for (var i = 0; i < qs_arr.length; i++) { var tmp = qs_arr[i].split("="); var elem = document.createElement("input"); elem.type = "hidden"; elem.name = tmp[0]; elem.value = tmp[1]; document.forms.historyForm.appendChild(elem); } document.forms.historyForm.submit(); } else { top.location.hash = flexAppUrl; } // We also have to maintain the history by hand for Safari historyHash[history.length] = flexAppUrl; _storeStates(); } else { // Otherwise, write an anchor into the page and tell the browser to go there addAnchor(flexAppUrl); setHash(flexAppUrl); } } backStack.push(createState(baseUrl, newUrl, flexAppUrl)); } function _storeStates() { if (browser.safari) { getRememberElement().value = historyHash.join(","); } } function handleBackButton() { //The "current" page is always at the top of the history stack. var current = backStack.pop(); if (!current) { return; } var last = backStack[backStack.length - 1]; if (!last && backStack.length == 0){ last = initialState; } forwardStack.push(current); } function handleForwardButton() { //summary: private method. Do not call this directly. var last = forwardStack.pop(); if (!last) { return; } backStack.push(last); } function handleArbitraryUrl() { //delete all the history entries forwardStack = []; } /* Called periodically to poll to see if we need to detect navigation that has occurred */ function checkForUrlChange() { if (browser.ie) { if (currentHref != document.location.href && currentHref + '#' != document.location.href) { //This occurs when the user has navigated to a specific URL //within the app, and didn't use browser back/forward //IE seems to have a bug where it stops updating the URL it //shows the end-user at this point, but programatically it //appears to be correct. Do a full app reload to get around //this issue. if (browser.version < 7) { currentHref = document.location.href; document.location.reload(); } else { if (getHash() != getIframeHash()) { // this.iframe.src = this.blankURL + hash; var sourceToSet = historyFrameSourcePrefix + getHash(); getHistoryFrame().src = sourceToSet; } } } } if (browser.safari) { // For Safari, we have to check to see if history.length changed. if (currentHistoryLength >= 0 && history.length != currentHistoryLength) { //alert("did change: " + history.length + ", " + historyHash.length + "|" + historyHash[history.length] + "|>" + historyHash.join("|")); // If it did change, then we have to look the old state up // in our hand-maintained array since document.location.hash // won't have changed, then call back into BrowserManager. 
currentHistoryLength = history.length; var flexAppUrl = historyHash[currentHistoryLength]; if (flexAppUrl == '') { //flexAppUrl = defaultHash; } //ADR: to fix multiple if (typeof BrowserHistory_multiple != "undefined" && BrowserHistory_multiple == true) { var pl = getPlayers(); for (var i = 0; i < pl.length; i++) { pl[i].browserURLChange(flexAppUrl); } } else { getPlayer().browserURLChange(flexAppUrl); } _storeStates(); } } if (browser.firefox) { if (currentHref != document.location.href) { var bsl = backStack.length; var urlActions = { back: false, forward: false, set: false } if ((window.location.hash == initialHash || window.location.href == initialHref) && (bsl == 1)) { urlActions.back = true; // FIXME: could this ever be a forward button? // we can't clear it because we still need to check for forwards. Ugg. // clearInterval(this.locationTimer); handleBackButton(); } // first check to see if we could have gone forward. We always halt on // a no-hash item. if (forwardStack.length > 0) { if (forwardStack[forwardStack.length-1].flexAppUrl == getHash()) { urlActions.forward = true; handleForwardButton(); } } // ok, that didn't work, try someplace back in the history stack if ((bsl >= 2) && (backStack[bsl - 2])) { if (backStack[bsl - 2].flexAppUrl == getHash()) { urlActions.back = true; handleBackButton(); } } if (!urlActions.back && !urlActions.forward) { var foundInStacks = { back: -1, forward: -1 } for (var i = 0; i < backStack.length; i++) { if (backStack[i].flexAppUrl == getHash() && i != (bsl - 2)) { arbitraryUrl = true; foundInStacks.back = i; } } for (var i = 0; i < forwardStack.length; i++) { if (forwardStack[i].flexAppUrl == getHash() && i != (bsl - 2)) { arbitraryUrl = true; foundInStacks.forward = i; } } handleArbitraryUrl(); } // Firefox changed; do a callback into BrowserManager to tell it. currentHref = document.location.href; var flexAppUrl = getHash(); if (flexAppUrl == '') { //flexAppUrl = defaultHash; } //ADR: to fix multiple if (typeof BrowserHistory_multiple != "undefined" && BrowserHistory_multiple == true) { var pl = getPlayers(); for (var i = 0; i < pl.length; i++) { pl[i].browserURLChange(flexAppUrl); } } else { getPlayer().browserURLChange(flexAppUrl); } } } //setTimeout(checkForUrlChange, 50); } /* Write an anchor into the page to legitimize it as a URL for Firefox et al. 
*/ function addAnchor(flexAppUrl) { if (document.getElementsByName(flexAppUrl).length == 0) { getAnchorElement().innerHTML += "<a name='" + flexAppUrl + "'>" + flexAppUrl + "</a>"; } } var _initialize = function () { if (browser.ie) { var scripts = document.getElementsByTagName('script'); for (var i = 0, s; s = scripts[i]; i++) { if (s.src.indexOf("history.js") > -1) { var iframe_location = (new String(s.src)).replace("history.js", "historyFrame.html"); } } historyFrameSourcePrefix = iframe_location + "?"; var src = historyFrameSourcePrefix; var iframe = document.createElement("iframe"); iframe.id = 'ie_historyFrame'; iframe.name = 'ie_historyFrame'; //iframe.src = historyFrameSourcePrefix; try { document.body.appendChild(iframe); } catch(e) { setTimeout(function() { document.body.appendChild(iframe); }, 0); } } if (browser.safari) { var rememberDiv = document.createElement("div"); rememberDiv.id = 'safari_rememberDiv'; document.body.appendChild(rememberDiv); rememberDiv.innerHTML = '<input type="text" id="safari_remember_field" style="width: 500px;">'; var formDiv = document.createElement("div"); formDiv.id = 'safari_formDiv'; document.body.appendChild(formDiv); var reloader_content = document.createElement('div'); reloader_content.id = 'safarireloader'; var scripts = document.getElementsByTagName('script'); for (var i = 0, s; s = scripts[i]; i++) { if (s.src.indexOf("history.js") > -1) { html = (new String(s.src)).replace(".js", ".html"); } } reloader_content.innerHTML = '<iframe id="safarireloader-iframe" src="about:blank" frameborder="no" scrolling="no"></iframe>'; document.body.appendChild(reloader_content); reloader_content.style.position = 'absolute'; reloader_content.style.left = reloader_content.style.top = '-9999px'; iframe = reloader_content.getElementsByTagName('iframe')[0]; if (document.getElementById("safari_remember_field").value != "" ) { historyHash = document.getElementById("safari_remember_field").value.split(","); } } if (browser.firefox) { var anchorDiv = document.createElement("div"); anchorDiv.id = 'firefox_anchorDiv'; document.body.appendChild(anchorDiv); } //setTimeout(checkForUrlChange, 50); } return { historyHash: historyHash, backStack: function() { return backStack; }, forwardStack: function() { return forwardStack }, getPlayer: getPlayer, initialize: function(src) { _initialize(src); }, setURL: function(url) { document.location.href = url; }, getURL: function() { return document.location.href; }, getTitle: function() { return document.title; }, setTitle: function(title) { try { backStack[backStack.length - 1].title = title; } catch(e) { } //if on safari, set the title to be the empty string. if (browser.safari) { if (title == "") { try { var tmp = window.location.href.toString(); title = tmp.substring((tmp.lastIndexOf("/")+1), tmp.lastIndexOf("#")); } catch(e) { title = ""; } } } document.title = title; }, setDefaultURL: function(def) { defaultHash = def; def = getHash(); //trailing ? is important else an extra frame gets added to the history //when navigating back to the first page. Alternatively could check //in history frame navigation to compare # and ?. 
if (browser.ie) { window['_ie_firstload'] = true; var sourceToSet = historyFrameSourcePrefix + def; var func = function() { getHistoryFrame().src = sourceToSet; window.location.replace("#" + def); setInterval(checkForUrlChange, 50); } try { func(); } catch(e) { window.setTimeout(function() { func(); }, 0); } } if (browser.safari) { currentHistoryLength = history.length; if (historyHash.length == 0) { historyHash[currentHistoryLength] = def; var newloc = "#" + def; window.location.replace(newloc); } else { //alert(historyHash[historyHash.length-1]); } //setHash(def); setInterval(checkForUrlChange, 50); } if (browser.firefox || browser.opera) { var reg = new RegExp("#" + def + "$"); if (window.location.toString().match(reg)) { } else { var newloc ="#" + def; window.location.replace(newloc); } setInterval(checkForUrlChange, 50); //setHash(def); } }, /* Set the current browser URL; called from inside BrowserManager to propagate * the application state out to the container. */ setBrowserURL: function(flexAppUrl, objectId) { if (browser.ie && typeof objectId != "undefined") { currentObjectId = objectId; } //fromIframe = fromIframe || false; //fromFlex = fromFlex || false; //alert("setBrowserURL: " + flexAppUrl); //flexAppUrl = (flexAppUrl == "") ? defaultHash : flexAppUrl ; var pos = document.location.href.indexOf('#'); var baseUrl = pos != -1 ? document.location.href.substr(0, pos) : document.location.href; var newUrl = baseUrl + '#' + flexAppUrl; if (document.location.href != newUrl && document.location.href + '#' != newUrl) { currentHref = newUrl; addHistoryEntry(baseUrl, newUrl, flexAppUrl); currentHistoryLength = history.length; } return false; }, browserURLChange: function(flexAppUrl) { var objectId = null; if (browser.ie && currentObjectId != null) { objectId = currentObjectId; } pendingURL = ''; if (typeof BrowserHistory_multiple != "undefined" && BrowserHistory_multiple == true) { var pl = getPlayers(); for (var i = 0; i < pl.length; i++) { try { pl[i].browserURLChange(flexAppUrl); } catch(e) { } } } else { try { getPlayer(objectId).browserURLChange(flexAppUrl); } catch(e) { } } currentObjectId = null; } } })(); // Initialization // Automated unit testing and other diagnostics function setURL(url) { document.location.href = url; } function backButton() { history.back(); } function forwardButton() { history.forward(); } function goForwardOrBackInHistory(step) { history.go(step); } //BrowserHistoryUtils.addEvent(window, "load", function() { BrowserHistory.initialize(); }); (function(i) { var u =navigator.userAgent;var e=/*@cc_on!@*/false; var st = setTimeout; if(/webkit/i.test(u)){ st(function(){ var dr=document.readyState; if(dr=="loaded"||dr=="complete"){i()} else{st(arguments.callee,10);}},10); } else if((/mozilla/i.test(u)&&!/(compati)/.test(u)) || (/opera/i.test(u))){ document.addEventListener("DOMContentLoaded",i,false); } else if(e){ (function(){ var t=document.createElement('doc:rdy'); try{t.doScroll('left'); i();t=null; }catch(e){st(arguments.callee,0);}})(); } else{ window.onload=i; } })( function() {BrowserHistory.initialize();} );

    Read the article

  • MSDTC - Communication with the underlying transaction manager has failed (Firewall open, MSDTC netwo

    - by SocialAddict
    I'm having problems with my ASP.NET Web Forms system. It worked on our test server, but now that we are putting it live, one of the servers is within a DMZ and the SQL server is outside of that (still on our network, although on a different subnet). I have opened up the firewall completely between these two boxes to see if that was the issue, and it still gives the error message "Communication with the underlying transaction manager has failed" whenever we try to use a TransactionScope. We can access the data for retrieval; it's just transactions that break it. We have also used MSDTC ping to test the connection, and with the amendments to the firewall it pings successfully, but the same error occurs! How do I resolve this error? Any help would be great as we have a system to go live today. Panic :)
    Edit: I have created a more straightforward test page with a transaction, as below, and this works fine. Could a nested transaction cause this kind of error, and if so, why would it only cause an issue when using a live box in a DMZ with a firewall?

    AuditRepository auditRepository = new AuditRepository();
    try
    {
        using (TransactionScope scope = new TransactionScope())
        {
            auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#1", 1);
            auditRepository.Save();
            auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#2", 1);
            auditRepository.Save();
            scope.Complete();
        }
    }
    catch (Exception ex)
    {
        Response.Write("Test Error For Transaction: " + ex.Message + "<br />" + ex.StackTrace);
    }

    This is the error stack we are getting when the problem occurs:
    at System.Transactions.TransactionInterop.GetOletxTransactionFromTransmitterPropigationToken(Byte[] propagationToken)
    at System.Transactions.TransactionStatePSPEOperation.PSPEPromote(InternalTransaction tx)
    at System.Transactions.TransactionStateDelegatedBase.EnterState(InternalTransaction tx)
    at System.Transactions.EnlistableStates.Promote(InternalTransaction tx)
    at System.Transactions.Transaction.Promote()
    at System.Transactions.TransactionInterop.ConvertToOletxTransaction(Transaction transaction)
    at System.Transactions.TransactionInterop.GetExportCookie(Transaction transaction, Byte[] whereabouts)
    at System.Data.SqlClient.SqlInternalConnection.GetTransactionCookie(Transaction transaction, Byte[] whereAbouts)
    at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx)
    at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx)
    at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction)
    at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction)
    at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
    at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
    at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
    at System.Data.SqlClient.SqlConnection.Open()
    at System.Data.Linq.SqlClient.SqlConnectionManager.UseConnection(IConnectionUser user)
    at System.Data.Linq.SqlClient.SqlProvider.get_IsSqlCe()
    at System.Data.Linq.SqlClient.SqlProvider.InitializeProviderMode()
    at System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query)
    at System.Data.Linq.ChangeDirector.StandardChangeDirector.DynamicInsert(TrackedObject item)
    at System.Data.Linq.ChangeDirector.StandardChangeDirector.Insert(TrackedObject item)
    at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode)
    at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
    at System.Data.Linq.DataContext.SubmitChanges()
    at RegBook.classes.DbBase.Save()
    at RegBook.usercontrols.BookingProcess.confirmBookingButton_Click(Object sender, EventArgs e)
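    A note on the nested-transaction question above, offered as a hedged sketch rather than a diagnosis: a TransactionScope only needs MSDTC once it escalates to a distributed transaction, and a common trigger for escalation is a second connection enlisting in the same scope (the exact rules depend on the SQL Server and ADO.NET versions). The snippet below is illustrative only; the connection string and the trivial commands are assumptions, not code from the system above.

    using System;
    using System.Data.SqlClient;
    using System.Transactions;

    class PromotionSketch
    {
        static void Main()
        {
            // Hypothetical connection string, for illustration only.
            const string cs = "Server=sqlbox;Database=AppDb;Integrated Security=SSPI;";

            using (var scope = new TransactionScope())
            using (var first = new SqlConnection(cs))
            using (var second = new SqlConnection(cs))
            {
                first.Open();   // enlists in a lightweight, local transaction
                second.Open();  // a second simultaneous enlistment typically escalates the
                                // transaction to MSDTC, so DTC traffic must cross the firewall
                new SqlCommand("SELECT 1", first).ExecuteScalar();
                new SqlCommand("SELECT 1", second).ExecuteScalar();
                scope.Complete();
            }
        }
    }

    If the LINQ to SQL DataContext inside the repository opens its own connection while another connection is already enlisted, the same escalation can occur, which would explain why the error only shows up once a firewalled DMZ sits between the web and SQL boxes.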

    Read the article

  • Could not load file or assembly 'System.Web.Ajax, Version=3.0.31106.0

    - by Jonesy
    Hi folks, I have a .NET application (VB.NET) and I'm using the AJAX Control Toolkit. It works fine on my production machine, but when I upload it to the host (Fasthosts) I get this error:
    Could not load file or assembly 'System.Web.Ajax, Version=3.0.31106.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e' or one of its dependencies. The module was expected to contain an assembly manifest.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.BadImageFormatException: Could not load file or assembly 'System.Web.Ajax, Version=3.0.31106.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e' or one of its dependencies. The module was expected to contain an assembly manifest.
    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
    Assembly Load Trace: The following information can be helpful to determine why the assembly 'System.Web.Ajax, Version=3.0.31106.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e' could not be loaded.
    WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].
    Stack Trace:
    [BadImageFormatException: Could not load file or assembly 'System.Web.Ajax, Version=3.0.31106.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e' or one of its dependencies. The module was expected to contain an assembly manifest.]
    AjaxControlToolkit.ToolkitScriptManager.ApplyAssembly(ScriptReference script, Boolean isComposite) +0
    AjaxControlToolkit.ToolkitScriptManager.OnResolveScriptReference(ScriptReferenceEventArgs e) +167
    System.Web.UI.ScriptManager.RegisterScripts() +191
    System.Web.UI.ScriptManager.OnPagePreRenderComplete(Object sender, EventArgs e) +113
    System.Web.UI.Page.OnPreRenderComplete(EventArgs e) +8698462
    System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1029
    Here is my web.config file. It's very simple:
    <system.web>
      <customErrors mode="Off"/>
      <compilation debug="true">
        <assemblies>
          <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          <add assembly="System.Web.Extensions.Design, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          <add assembly="System.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A"/>
          <add assembly="System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
        </assemblies>
      </compilation>
    </system.web>
    Does anyone know what's up? -- Billy
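    The binding trace above already points at one diagnostic step: turning on assembly bind failure (Fusion) logging. A minimal sketch of setting the registry value the message names is below; it assumes you can run code (or regedit) with administrative rights on the server, which may not be possible on shared hosting, and it only gathers more detail rather than fixing the load failure.

    using Microsoft.Win32;

    class EnableFusionLog
    {
        static void Main()
        {
            // Sets HKLM\SOFTWARE\Microsoft\Fusion!EnableLog (DWORD) = 1, as the trace suggests.
            // Remove the value again afterwards to avoid the performance penalty it mentions.
            using (RegistryKey fusion = Registry.LocalMachine.CreateSubKey(@"SOFTWARE\Microsoft\Fusion"))
            {
                fusion.SetValue("EnableLog", 1, RegistryValueKind.DWord);
            }
        }
    }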

    Read the article

  • Application Specific Paths for DLL Loading when DLL is loaded dynamically

    - by MartinHT
    Hi: I am building a program that uses a very simple plugin system. This is the code I'm using to load the possible plugins:

    public interface IPlugin
    {
        string Name { get; }
        string Description { get; }
        bool Execute(System.Windows.Forms.IWin32Window parent);
    }

    private void loadPlugins()
    {
        int idx = 0;
        string[] pluginFolders = getPluginFolders();
        Array.ForEach(pluginFolders, folder =>
        {
            string[] pluginFiles = getPluginFiles(folder);
            Array.ForEach(pluginFiles, file =>
            {
                try
                {
                    System.Reflection.Assembly assembly = System.Reflection.Assembly.LoadFile(file);
                    Array.ForEach(assembly.GetTypes(), type =>
                    {
                        if (type.GetInterface("PluginExecutor.IPlugin") != null)
                        {
                            IPlugin plugin = assembly.CreateInstance(type.ToString()) as IPlugin;
                            if (plugin != null)
                                lista.Add(new PluginItem(plugin.Name, plugin.Description, file, plugin));
                        }
                    });
                }
                catch (Exception)
                {
                }
            });
        });
    }

    When the user selects a particular plugin from the list, I launch the plugin's Execute method. So far, so good! As you can see, the plugins are loaded from a folder, and within the folder are several DLLs that are needed by the plugin. My problem is that I can't get the plugin to 'see' those DLLs: the runtime only searches the launching application's startup folder, not the folder the plugin was loaded from. I have tried several methods:
    1. Changing the current directory to the plugin's folder.
    2. Using an interop call to SetDllDirectory.
    3. Adding an entry in the registry to point to a folder where I want it to look (see code below).
    None of these methods work. What am I missing? As I load the plugin DLL dynamically, it does not seem to obey any of the above-mentioned methods. What else can I try? Regards, MartinH.

    //HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths
    Microsoft.Win32.RegistryKey appPaths = Microsoft.Win32.Registry.LocalMachine.CreateSubKey(
        string.Format(
            @"SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\{0}",
            System.IO.Path.GetFileName(Application.ExecutablePath)),
        Microsoft.Win32.RegistryKeyPermissionCheck.ReadWriteSubTree);
    appPaths.SetValue(string.Empty, Application.ExecutablePath);
    object path = appPaths.GetValue("Path");
    if (path == null)
        appPaths.SetValue("Path", System.IO.Path.GetDirectoryName(pluginItem.FileName));
    else
    {
        string strPath = string.Format("{0};{1}", path, System.IO.Path.GetDirectoryName(pluginItem.FileName));
        appPaths.SetValue("Path", strPath);
    }
    appPaths.Flush();
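    One avenue not among the three listed above, offered as a sketch rather than a confirmed fix: if the extra DLLs are managed assemblies, they can be resolved from the plugin's own folder by handling AppDomain.AssemblyResolve before the plugin's Execute method is called. The pluginFolder parameter here is hypothetical and would come from the folder enumeration in loadPlugins; native DLLs are not covered by this and still follow the normal Win32 search order.

    using System;
    using System.IO;
    using System.Reflection;

    static class PluginDependencyResolver
    {
        // Call once per plugin folder, before invoking plugin.Execute.
        public static void Hook(string pluginFolder)
        {
            AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
            {
                string fileName = new AssemblyName(args.Name).Name + ".dll";
                string candidate = Path.Combine(pluginFolder, fileName);
                // Load the dependency from the plugin folder if it exists there,
                // otherwise return null and let the default probing (and failure) continue.
                return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
            };
        }
    }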

    Read the article

  • Using Wicked with Devise for a sign up wizard

    - by demondeac11
    I am using Devise with Wicked to create a sign-up wizard, but I am unsure about a problem I am having creating profiles. After a user provides their email and password they are forwarded to a step to create a profile, based on whether they have specified they are a shipper or a carrier. However, I am unsure what the code should be in the controller and the forms to generically create a profile. Here is the code I have for the application.
    The steps controller:

    class UserStepsController < ApplicationController
      include Wicked::Wizard
      steps :carrier_profile, :shipper_profile

      def create
        @user = User.last
        case step
        when :carrier_profile
          @profile = CarrierProfile.create!(:dot => params[:dot])
          if @profile.save
            render_wizard @user
          else
            flash[:alert] = "Record not saved"
          end
        when :shipper_profile
          @profile = ShipperProfile.create!(params[:shipper_profile])
          if @profile.save
            render_wizard @user
          else
            flash[:alert] = "Record not saved"
          end
        end
      end

      def show
        @user = User.last
        @carrier_profile = CarrierProfile.new
        @shipper_profile = ShipperProfile.new
        case step
        when :carrier_profile
          skip_step if @user.shipper?
        when :shipper_profile
          skip_step if @user.carrier?
        end
        render_wizard
      end
    end

    The form for a carrier profile:

    <% form_for @carrier_profile, url: wizard_path, method: :post do |f| %>
      <div>
        <%= f.label :dot, "Please enter your DOT Number:" %>
        <%= f.text_field :dot %>
      </div>
      <%= f.submit "Next Step", class: "btn btn-primary" %>
    <% end %>

    The form for a shipper profile:

    <% form_for @shipper_profile, url: wizard_path, method: :post do |f| %>
      <div>
        <%= f.label :company_name, "What is your company name?" %>
        <%= f.text_field :company_name %>
      </div>
      <%= f.submit "Next Step", class: "btn btn-primary" %>
    <% end %>

    The user model:

    class User < ActiveRecord::Base
      has_one :carrier_profile
      has_one :shipper_profile
    end

    How would I be able to write generic new and create methods to handle creating both profiles? With the current code it is stating that the user_steps controller has no POST method, although if I run rake routes I find that this is untrue.

    Read the article

< Previous Page | 705 706 707 708 709 710 711 712 713 714 715 716  | Next Page >