Search Results

Search found 36520 results on 1461 pages for 'default editing language'.


  • How can I check if PHP was compiled with the UNICODE version of the Win32 API?

    - by Wesley Murch
    This is related to this Stack Overflow post: "glob() can't find file names with multibyte characters on Windows?" I'm having issues with PHP and files that have multibyte characters on Windows. Here's my test case:

        print_r(scandir('./uploads/'));
        print_r(glob('./uploads/*'));

    Correct output on the remote UNIX server:

        Array ( [0] => . [1] => .. [2] => filename-äöü.jpg [3] => filename.jpg [4] => test?test.jpg [5] => ??? ?????.jpg [6] => ?????????.jpg [7] => ???.jpg )
        Array ( [0] => ./uploads/filename-äöü.jpg [1] => ./uploads/filename.jpg [2] => ./uploads/test?test.jpg [3] => ./uploads/??? ?????.jpg [4] => ./uploads/?????????.jpg [5] => ./uploads/???.jpg )

    Incorrect output locally on Windows:

        Array ( [0] => . [1] => .. [2] => ??? ?????.jpg [3] => ???.jpg [4] => ?????????.jpg [5] => filename-äöü.jpg [6] => filename.jpg [7] => test?test.jpg )
        Array ( [0] => ./uploads/filename-äöü.jpg [1] => ./uploads/filename.jpg )

    Here's a relevant excerpt from the answer I chose to accept (which is actually a quote from the comments on an article posted online over 2 years ago, http://www.rooftopsolutions.nl/blog/filesystem-encoding-and-php):

    "The output from your PHP installation on Windows is easy to explain: you installed the wrong version of PHP, and used a version not compiled to use the Unicode version of the Win32 API. For this reason, the filesystem calls used by PHP will use the legacy 'ANSI' API, and so the C/C++ libraries linked with this version of PHP will first try to convert your UTF-8-encoded PHP string into the local 'ANSI' codepage selected in the running environment (see the CHCP command before starting PHP from a command-line window). Your version of Windows is most probably not responsible for this weird thing. Actually, it is your version of PHP which is not compiled correctly, and which uses the legacy ANSI version of the Win32 API (for compatibility with the legacy 16-bit versions of Windows 95/98, whose filesystem support in the kernel had no direct support for Unicode, but used an internal conversion layer to convert Unicode to the local ANSI codepage before using the actual ANSI version of the API). Recompile PHP using the compiler option to use the UNICODE version of the Win32 API (which should be the default today, and anyway always the default for PHP installed on a server that will never be Windows 95 or Windows 98...)"

    I can't confirm whether this is my problem or not. I used phpinfo() and did not find anything interesting, but I wasn't sure what to look for. I've been using XAMPP for easy installations, so I'm really not sure exactly how it was installed. I'm using Windows 7, 64-bit - so forgive my ignorance, but I'm not even sure if "Win32" is relevant here. How can I check if my current version of PHP was compiled with the configuration mentioned above?

        PHP Version: 5.3.8
        System: Windows NT WES-PC 6.1 build 7601 (Windows 7 Home Premium Edition Service Pack 1) i586
        Build Date: Aug 23 2011 11:47:20
        Compiler: MSVC9 (Visual C++ 2008)
        Architecture: x86
        Configure Command: cscript /nologo configure.js "--enable-snapshot-build" "--disable-isapi" "--enable-debug-pack" "--disable-isapi" "--without-mssql" "--without-pdo-mssql" "--without-pi3web" "--with-pdo-oci=D:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8=D:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8-11g=D:\php-sdk\oracle\instantclient11\sdk,shared" "--enable-object-out-dir=../obj/" "--enable-com-dotnet" "--with-mcrypt=static" "--disable-static-analyze"
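
    For what it's worth, a version check may be simpler than hunting for a compile flag: as far as I can tell, no official 5.x Windows build of PHP used the Unicode ("W") variants of the Win32 filesystem API at all; UTF-8 filename support on Windows only arrived with PHP 7.1. A minimal sketch under that assumption:

        <?php
        // Sketch, assuming the commonly cited PHP timeline: official Windows
        // builds before 7.1 call the ANSI ("A") Win32 filesystem functions;
        // PHP 7.1+ switched to the wide ("W") variants with UTF-8 strings.
        // On older builds there is no configure flag to inspect - the version
        // itself answers the question.
        if (DIRECTORY_SEPARATOR === '\\' && PHP_VERSION_ID < 70100) {
            echo "Pre-7.1 Windows build: filesystem calls go through the ANSI API.\n";
        } else {
            echo "PHP 7.1+ on Windows: the Unicode (UTF-8) filesystem API is used.\n";
        }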

  • Bind9 Debian Not responding

    - by Marc
    I'm trying to set up a webserver with Bind9 and apache2 on Debian 6. I am trying to learn to do it manually, so I do not have any control panels or anything, just the command line. I have a domain name, let's call it www.example.com. I want a virtual host setup so that I can have multiple websites with different names on my server. I have ns1.example.com and ns2.example.com registered at my server's IP (123.456.789.12). Below is my Bind9 named.conf.options:

        options {
            directory "/var/cache/bind";
            // If there is a firewall between you and nameservers you want
            // to talk to, you may need to fix the firewall to allow multiple
            // ports to talk. See http://www.kb.cert.org/vuls/id/800113
            // If your ISP provided one or more IP addresses for stable
            // nameservers, you probably want to use them as forwarders.
            // Uncomment the following block, and insert the addresses replacing
            // the all-0's placeholder.
            // forwarders {
            //     0.0.0.0;
            // };
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    This is the default; I'm not sure if I was supposed to edit it. I didn't. Here is my named.conf.default-zones:

        // prime the server with knowledge of the root servers
        zone "." { type hint; file "/etc/bind/db.root"; };
        // be authoritative for the localhost forward and reverse zones, and for
        // broadcast zones as per RFC 1912
        zone "localhost" { type master; file "/etc/bind/db.local"; };
        zone "127.in-addr.arpa" { type master; file "/etc/bind/db.127"; };
        zone "0.in-addr.arpa" { type master; file "/etc/bind/db.0"; };
        zone "255.in-addr.arpa" { type master; file "/etc/bind/db.255"; };
        zone "example.com.com" { type master; file "etc/bind/example.com.db"; };

    named.conf.local is an empty file with a comment saying to do local configuration there. example.com.db looks like this:

        ; BIND data file for mywebsite.com
        ;
        $ORIGIN example.com.
        $TTL 604800
        @ IN SOA ns1.example.com. [email protected]. (
              2009120101 ; Serial
              604800     ; Refresh
              86400      ; Retry
              2419200    ; Expire
              604800 )   ; Negative Cache TTL
        ;
                  IN NS    ns1.example.com.
                  IN NS    ns2.example.com.
                  IN MX 10 mail.example.com.
        localhost IN A     127.0.0.1
        example.com. IN A  123.456.789.12
        ns1       IN A     123.456.789.12
        ns2       IN A     123.456.789.12
        www       IN A     123.456.789.12
        ftp       IN A     123.456.789.12
        mail      IN A     123.456.789.12
        boards    IN CNAME www

    These are all settings I've found from various tutorials. Now when I go to intodns I get: "You should already know that your NS records at your nameservers are missing, so here it is again: ns1.example.com ns2.example.com". Can someone help me? I'm not sure what I'm doing wrong.
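
    Going only by what is quoted above, two typos in named.conf.default-zones are worth ruling out first: the zone is declared as "example.com.com" rather than "example.com", and the zone file path is missing its leading slash, so bind resolves it relative to its working directory instead of /etc/bind. A corrected stanza would look like:

        zone "example.com" {
            type master;
            file "/etc/bind/example.com.db";
        };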

  • How to use a different Ethernet connection

    - by SteveC
    I'm running a virtual machine at home which has a VPN connection to our main office, but I also want to connect to a share on another machine at home. When I check with IPCONFIG I can see two ethernet connections... my work VPN:

        Ethernet adapter Local Area Connection* 11:
           Connection-specific DNS Suffix . :
           Link-local IPv6 Address . . . . . : xxxx::xxxx:xxxx:xxxx:xxxxxxx
           IPv4 Address. . . . . . . . . . . : XXX.XXX.XXX.XXX
           Subnet Mask . . . . . . . . . . . : 255.255.254.0
           Default Gateway . . . . . . . . . : XXX.XXX.XXX.XXX

    ...and home local:

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix . : lan
           Link-local IPv6 Address . . . . . : xxxx::xxxx:xxxx:xxxx:xxxxxxx
           IPv4 Address. . . . . . . . . . . : 192.168.1.70
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . :

    What's weird is that when I've worked before with a plugged-in ethernet cable, I've had no problem getting to the share. I can PING the other machine, but I can't access the share at \\othermachine\c$. I tried TRACERT, but that disappears off to the work network and eventually gets back to the local other machine after a few time-outs. Is there any way to "force" the connection to stay local? UPDATE: the VPN is AEP SSL Tunnel.
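
    One thing worth trying - a sketch only, assuming the other machine sits on the same 192.168.1.0/24 subnet and the VPN client tolerates local routes (192.168.1.50 is a hypothetical address for "othermachine"): pin a route for the home subnet to the LAN adapter, then map the share by IP so name resolution can't drag the request through the tunnel.

        :: run from an elevated command prompt
        route add 192.168.1.0 mask 255.255.255.0 192.168.1.70 metric 1
        net use Z: \\192.168.1.50\c$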

  • Linux: Apple Wireless A1314 Fn key not registered, looks like software bug

    - by ramplank
    I'm trying to set up my Apple Wireless Keyboard with my Kubuntu systems. These are PCs powered by an Intel Atom and an Intel i5 respectively. The keyboard has a US layout and has model number A1314 written on the back. It takes two AA batteries - I mention that because it appears there are multiple types of model A1314. I have tried this on 10.04, 11.04, 11.10 and 12.04 systems with no success. Every time, using a bluetooth dongle and the KDE bluetooth notification tray applet, the keyboard can be connected; in both cases it shows up as "Apple Wireless Keyboard". Almost everything works as expected - in fact, I'm typing on it right now. But one thing doesn't: the Fn key. I'd like to use Fn + Down Arrow as PgDn / Page Down; I understand this is default behaviour on Apple keyboards. And of course I'd like the same for Page Up, Home and End, but I'll stick to Page Down in my example. I used the xev tool to see the keycodes the system receives: if I press Fn, nothing happens and nothing is registered; if I press Fn + Down Arrow, xev only registers the down arrow. Here's the output from my 11.04 system to illustrate.

    Press just the Fn key: no output.

    Press the Down Arrow key:

        KeyPress event, serial 36, synthetic NO, window 0x4400001,
            root 0x15d, subw 0x4400002, time 2699773, (44,45), root:(1352,298),
            state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES,
            XLookupString gives 0 bytes:
            XmbLookupString gives 0 bytes:
            XFilterEvent returns: False

        KeyRelease event, serial 36, synthetic NO, window 0x4400001,
            root 0x15d, subw 0x4400002, time 2699860, (44,45), root:(1352,298),
            state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES,
            XLookupString gives 0 bytes:
            XFilterEvent returns: False

    Press the Fn + Down Arrow keys together:

        KeyPress event, serial 36, synthetic NO, window 0x4400001,
            root 0x15d, subw 0x4400002, time 2701548, (44,45), root:(1352,298),
            state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES,
            XLookupString gives 0 bytes:
            XmbLookupString gives 0 bytes:
            XFilterEvent returns: False

        KeyRelease event, serial 36, synthetic NO, window 0x4400001,
            root 0x15d, subw 0x4400002, time 2701623, (44,45), root:(1352,298),
            state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES,
            XLookupString gives 0 bytes:
            XFilterEvent returns: False

    I've been searching this forum and other Linux-related forums for hours, but I still have not found a solution. I mostly found advice for people using an actual Apple laptop or desktop, which I don't have. They said to try something like the following:

        echo 2 > /sys/module/hid_apple/ ...

    But since there's no hid_apple directory present on my systems, I needed to modprobe hid_apple first. That didn't help either. I'm fine with changing some config files, or compiling my own patched kernel if that's necessary. I currently have 10.04 and 12.04 systems available to test. The same issue occurs when hooked up to Windows 7: the Fn key still does nothing, by itself or in combination with other keys. With some AutoHotkey fiddling I was able to confirm the key is registered as pressed, but ignored by default; a custom AutoHotkey script can fix that. But AutoHotkey is only for Windows - I want my problem fixed on Linux. Hooked up to an iPad 2 it only works in combination with the F1-F12 keys, not with the arrow keys. If the screen of the iPad is off and I press just the Fn key, the screen comes on, so the key itself is registered as pressed.

    So to sum up my question: can anyone help me get Page Up, Page Down, Home and End to work on this keyboard, when that requires me to use an Fn key which is currently not registered?
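
    For reference, the knob those Apple-laptop guides usually truncate is the hid_apple fnmode parameter. A sketch of the commonly suggested incantation - with the caveat that it assumes the hid_apple driver actually binds to this keyboard, which is exactly what appears not to be happening here:

        # load the driver, then ask it to treat the keys as plain function keys
        sudo modprobe hid_apple
        echo 2 | sudo tee /sys/module/hid_apple/parameters/fnmode
        # to make it stick across reboots:
        echo "options hid_apple fnmode=2" | sudo tee /etc/modprobe.d/hid_apple.conf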

  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
    A little background to this question first: I am running a RAID-6 within a QNAP TS869L external RAID/NAS system. I started with 5 disks of 3 TB each back in the day, and later added another 2 disks of 3 TB to the RAID. The QNAP internals handled the growing, re-syncing etc., and everything seemed to be perfectly fine. About 2 weeks ago, I had one of the disks (disk #5; disk #2 has gone bad in the meantime) fail, and somehow (I have no idea why) disks 1 and 2 also got kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to a data recovery company that I can't afford. After some digging through old log files, resetting the disk to factory default, etc., I found a few errors that were made during this re-create - I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson). I'm currently at the point where I know the correct chunk size (64K) and metadata version (1.0; the factory default was 0.9, but from what I read 0.9 doesn't handle disks over 2 TB, and mine are 3 TB), and I can now find the ext4 filesystem that should be on the disks. The only variable left to determine is the right disk order! I started using the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using", but am a little confused on what the order should be for a proper RAID-6. RAID-5 is pretty well documented in a number of places, but RAID-6 much less so. Also, does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync re-organize them the way a native 7-disk RAID-6 would have been laid out?

    Thanks. Some more mdadm output that might be helpful. mdadm version:

        [~] # mdadm --version
        mdadm - v2.6.3 - 20th August 2007

    mdadm details from one of the disks in the array:

        [~] # mdadm --examine /dev/sda3
        /dev/sda3:
                  Magic : a92b4efc
                Version : 1.0
            Feature Map : 0x0
             Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
                   Name : 0
          Creation Time : Tue Jun 10 10:27:58 2014
             Raid Level : raid6
           Raid Devices : 7
          Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
             Array Size : 29286975360 (13965.12 GiB 14994.93 GB)
              Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
           Super Offset : 5857395368 sectors
                  State : clean
            Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af
            Update Time : Tue Jun 10 13:01:06 2014
               Checksum : d275c82d - correct
                 Events : 7036
             Chunk Size : 64K
             Array Slot : 0 (0, 1, failed, 3, failed, 5, 6)
            Array State : Uu_u_uu 2 failed

    mdadm details for the array in the current disk order (based on my best guess, reconstructed from old log files):

        [~] # mdadm --detail /dev/md0
        /dev/md0:
                Version : 01.00.03
          Creation Time : Tue Jun 10 10:27:58 2014
             Raid Level : raid6
             Array Size : 14643487680 (13965.12 GiB 14994.93 GB)
          Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
           Raid Devices : 7
          Total Devices : 5
        Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Tue Jun 10 13:01:06 2014
                  State : clean, degraded
         Active Devices : 5
        Working Devices : 5
         Failed Devices : 0
          Spare Devices : 0
             Chunk Size : 64K
                   Name : 0
                   UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
                 Events : 7036

            Number   Major   Minor   RaidDevice State
               0       8        3        0      active sync   /dev/sda3
               1       8       19        1      active sync   /dev/sdb3
               2       0        0        2      removed
               3       8       51        3      active sync   /dev/sdd3
               4       0        0        4      removed
               5       8       99        5      active sync   /dev/sdg3
               6       8       83        6      active sync   /dev/sdf3

    Output from /proc/mdstat (md8, md9, and md13 are internally used RAIDs holding swap etc.; the one I'm after is md0):

        [~] # more /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
        md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0]
              14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU]
        md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0]
              530048 blocks [2/2] [UU]
        md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0]
              458880 blocks [8/7] [UUUUUUU_]
              bitmap: 21/57 pages [84KB], 4KB chunk
        md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1]
              530048 blocks [8/7] [UUUUUUU_]
              bitmap: 37/65 pages [148KB], 4KB chunk
        unused devices: <none>
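
    The usual last-resort approach in this situation is to brute-force the order: re-create the array with --assume-clean for each candidate permutation and test whether a valid filesystem appears. The sketch below makes big assumptions - it must run against read-only loop devices backed by full dd images of the partitions (never the raw disks, since --create rewrites superblocks), the two failed slots are filled with the literal word "missing", and mdadm's default left-symmetric RAID-6 layout is assumed:

        #!/bin/sh
        # DANGER: run only against copies/images of the member partitions.
        # /dev/loop1../dev/loop5 are hypothetical loop devices over images of
        # the five surviving members; "missing" holds the two failed slots.
        for ORDER in \
            "/dev/loop1 /dev/loop2 missing /dev/loop3 missing /dev/loop4 /dev/loop5" \
            "/dev/loop2 /dev/loop1 missing /dev/loop3 missing /dev/loop4 /dev/loop5"; do  # ...more permutations
            mdadm --stop /dev/md9_test 2>/dev/null
            mdadm --create /dev/md9_test --assume-clean --level=6 --raid-devices=7 \
                  --metadata=1.0 --chunk=64 $ORDER
            # a read-only fsck that finds a sane ext4 superblock suggests
            # (but does not by itself prove) the right order
            fsck.ext4 -n /dev/md9_test && echo "candidate order: $ORDER"
        done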

  • Clarification On Write-Caching Policy, Its Underlying Options And How It Applies To Hard Drives And Solid-State Drives

    - by Boris_yo
    In the last week, after doing more research on the subject, I have been wondering about what I have neglected all these years to understand about write-caching policy, always leaving it on the default setting. Write-caching policy improves write performance and consists of write-back caching and write-cache buffer flushing. This is how I understand it all, but correct me if I erred somewhere:

    Write-through caching itself is not part of the write-caching policy per se; it's when data is written to both the cache and the storage device, so if Windows needs that data again later, it is retrieved from cache and not from the storage device. That means only read performance improves, as there is no need to wait for the storage device to read the required data again. Since data is still written to the storage device, write performance isn't improved, and there is no risk of data loss or corruption in case of power failure or system crash; only the data in cache gets lost. This option seems to be enabled by default and is recommended for removable devices, with no need to use the "Safely Remove Hardware" function on the user's part.

    Write-back caching is similar to the above, but without immediately writing data to the storage device: data is periodically released from cache and written to the storage device when it is idle. In my opinion this option improves both read and write performance, but represents a risk if a power failure or system crash occurs, with the outcome of not only losing data that was eventually to be written to the storage device, but also causing file inconsistencies or a corrupted file system. Write-back caching cannot be enabled together with write-through caching, and it is not recommended if no backup power supply is available.

    Write-cache buffer flushing, I reckon, is similar to write-back caching, but enables immediate release and writing of data from cache to the storage device right before a power outage occurs - though I don't know if it also applies to an occasional system crash. This option seems to be complementary to write-back caching, reducing or potentially eliminating the risk of data loss or corruption of the file system.

    I have questions about the relevance of the last two options to today's modern SSDs, in order to get the best performance with less wear. I know that traditional hard drives come with onboard cache (I wonder what type of cache that is), but do SSDs also come with cache? Assuming they do, is this cache faster than their NAND flash and system RAM, and is it worth taking the risk of utilizing it by enabling write-back caching? I read somewhere that generally a storage device's cache is faster than RAM, but I want to be sure. Additionally, I read that write caching should be enabled, since data that is to be written later to NAND flash is kept for a while in cache, and provided some data gets modified a lot before finally being written, holding this data and releasing it periodically reduces write cycles to the SSD, thereby reducing wear. Now, regarding write-cache buffer flushing, I heard that SSD controllers are so fast by themselves that enabling this option is not required, because they manage flushing on their own. However, once again, I don't know if SSDs have their own onboard cache and whether or not it is faster than their NAND flash and system RAM, because if it is, keeping this option enabled would make sense.

    Recently I posted a question about an issue with my Intel 330 SSD 120GB, which was the main reason to do deeper research, suspecting the write-caching policy to be the culprit of the SSD's freezing issue, on the assumption that the moment data is released is what causes the freezes. Currently I have write caching enabled and write-cache buffer flushing disabled, because I believe the SSD controller's management of write-cache flushing and Windows' write-cache buffer flushing conflict with each other. Since I want to troubleshoot in small steps to finally determine the source of the issue, I have decided to start with the write-caching policy and then move on to drivers, switching to AHCI later on, and finally disabling DIPM (device-initiated power management) through a registry modification, thanks to @TomWijsman.

  • duplicate cache pages: Varnish

    - by Sukhjinder Singh
    Recently we configured Varnish on our server. It was successfully set up, but we noticed that if we open any page in multiple browsers, Varnish sends the request to Apache no matter whether the page is cached or not. If we refresh twice in each browser, it creates duplicate copies of the same page. What exactly should happen: if a page is cached by Varnish, subsequent requests should be served from Varnish itself, whether we are opening the same page in another browser or opening that page from a different IP address. Following is my default.vcl file:

        backend default {
            .host = "127.0.0.1";
            .port = "80";
        }

        sub vcl_recv {
            if (req.url ~ "^/search/.*$") {
            } else {
                set req.url = regsub(req.url, "\?.*", "");
            }
            if (req.restarts == 0) {
                if (req.http.x-forwarded-for) {
                    set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
                } else {
                    set req.http.X-Forwarded-For = client.ip;
                }
            }
            if (!req.backend.healthy) {
                unset req.http.Cookie;
            }
            set req.grace = 6h;
            if (req.url ~ "^/status\.php$" ||
                req.url ~ "^/update\.php$" ||
                req.url ~ "^/admin$" ||
                req.url ~ "^/admin/.*$" ||
                req.url ~ "^/flag/.*$" ||
                req.url ~ "^.*/ajax/.*$" ||
                req.url ~ "^.*/ahah/.*$") {
                return (pass);
            }
            if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
                unset req.http.Cookie;
            }
            if (req.http.Cookie) {
                set req.http.Cookie = ";" + req.http.Cookie;
                set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
                set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1=");
                set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
                set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");
                if (req.http.Cookie == "") {
                    unset req.http.Cookie;
                } else {
                    return (pass);
                }
            }
            if (req.request != "GET" && req.request != "HEAD" &&
                req.request != "PUT" && req.request != "POST" &&
                req.request != "TRACE" && req.request != "OPTIONS" &&
                req.request != "DELETE") {
                return (pipe); /* Non-RFC2616 or CONNECT which is weird. */
            }
            if (req.request != "GET" && req.request != "HEAD") {
                return (pass);
            }
            if (req.http.Accept-Encoding) {
                if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
                    # No point in compressing these
                    remove req.http.Accept-Encoding;
                } else if (req.http.Accept-Encoding ~ "gzip") {
                    set req.http.Accept-Encoding = "gzip";
                } else if (req.http.Accept-Encoding ~ "deflate") {
                    set req.http.Accept-Encoding = "deflate";
                } else {
                    # unknown algorithm
                    remove req.http.Accept-Encoding;
                }
            }
            return (lookup);
        }

        sub vcl_deliver {
            if (obj.hits > 0) {
                set resp.http.X-Varnish-Cache = "HIT";
            } else {
                set resp.http.X-Varnish-Cache = "MISS";
            }
        }

        sub vcl_fetch {
            if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) {
                set beresp.ttl = 10m;
            }
            if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
                unset beresp.http.set-cookie;
            }
            set beresp.grace = 6h;
        }

        sub vcl_hash {
            hash_data(req.url);
            if (req.http.host) {
                hash_data(req.http.host);
            } else {
                hash_data(server.ip);
            }
            return (hash);
        }

        sub vcl_pipe {
            set req.http.connection = "close";
        }

        sub vcl_hit {
            if (req.request == "PURGE") { ban_url(req.url); error 200 "Purged"; }
            if (!obj.ttl > 0s) { return (pass); }
        }

        sub vcl_miss {
            if (req.request == "PURGE") { error 200 "Not in cache"; }
        }
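
    Since this VCL already stamps an X-Varnish-Cache header in vcl_deliver, a quick way to confirm what the second request actually does is to fetch the same page twice and compare headers (a sketch; substitute the real host and path). If the backend emits Set-Cookie, or a Vary header that includes something browser-specific like User-Agent, each browser getting its own copy would match the symptom described:

        # hypothetical URL; run twice and compare X-Varnish-Cache and Age
        curl -sI http://www.example.com/some/page | egrep -i 'x-varnish|age|vary|set-cookie'
        curl -sI http://www.example.com/some/page | egrep -i 'x-varnish|age|vary|set-cookie'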

  • Apache SSL reverse proxy to an embedded Tomcat

    - by ggarcia24
    I'm trying to put in place a reverse proxy for an application that is running an embedded Tomcat server over SSL. The application needs to run over SSL on port 9002, so I have no way of "disabling SSL" for this app. The current setup schema looks like this:

        [192.168.0.10:443 - Apache with mod_proxy] --> [192.168.0.10:9002 - Tomcat App]

    After googling how to make such a setup (and testing), I came across this: https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/861137, which led to my current configuration (trying to emulate the --secure-protocol=sslv3 option of wget) in /etc/apache2/sites-enabled/default-ssl:

        <VirtualHost _default_:443>
            SSLEngine On
            SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
            SSLProxyEngine On
            SSLProxyProtocol SSLv3
            SSLProxyCipherSuite SSLv3
            ProxyPass /test/ https://192.168.0.10:9002/
            ProxyPassReverse /test/ https://192.168.0.10:9002/
            LogLevel debug
            ErrorLog /var/log/apache2/error-ssl.log
            CustomLog /var/log/apache2/access-ssl.log combined
        </VirtualHost>

    The thing is that the error log is showing:

        error:14077102:SSL routines:SSL23_GET_SERVER_HELLO:unsupported protocol

    Complete request log:

        [Wed Mar 13 20:05:57 2013] [debug] mod_proxy.c(1020): Running scheme https handler (attempt 0)
        [Wed Mar 13 20:05:57 2013] [debug] mod_proxy_http.c(1973): proxy: HTTP: serving URL https://192.168.0.10:9002/
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2011): proxy: HTTPS: has acquired connection for (192.168.0.10)
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2067): proxy: connecting https://192.168.0.10:9002/ to 192.168.0.10:9002
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2193): proxy: connected / to 192.168.0.10:9002
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2444): proxy: HTTPS: fam 2 socket created to connect to 192.168.0.10
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2576): proxy: HTTPS: connection complete to 192.168.0.10:9002 (192.168.0.10)
        [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection to child 0 established (server demo1agrubu01.demo.lab:443)
        [Wed Mar 13 20:05:57 2013] [info] Seeding PRNG with 656 bytes of entropy
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1866): OpenSSL: Handshake: start
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: before/connect initialization
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: unknown state
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1897): OpenSSL: read 7/7 bytes from BIO#7f122800a100 [mem: 7f1230018f60] (BIO dump follows)
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1830): +-------------------------------------------------------------------------+
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1869): | 0000: 15 03 01 00 02 02 50 ......P |
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1875): +-------------------------------------------------------------------------+
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1903): OpenSSL: Exit: error in unknown state
        [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] SSL Proxy connect failed
        [Wed Mar 13 20:05:57 2013] [info] SSL Library Error: 336032002 error:14077102:SSL routines:SSL23_GET_SERVER_HELLO:unsupported protocol
        [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection closed to child 0 with abortive shutdown (server example1.domain.tld:443)
        [Wed Mar 13 20:05:57 2013] [error] (502)Unknown error 502: proxy: pass request body failed to 172.31.4.13:9002 (192.168.0.10)
        [Wed Mar 13 20:05:57 2013] [error] [client 192.168.0.10] proxy: Error during SSL Handshake with remote server returned by /dsfe/
        [Wed Mar 13 20:05:57 2013] [error] proxy: pass request body failed to 192.168.0.10:9002 (172.31.4.13) from 172.31.4.13 ()
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2029): proxy: HTTPS: has released connection for (172.31.4.13)
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1884): OpenSSL: Write: SSL negotiation finished successfully
        [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection closed to child 6 with standard shutdown (server example1.domain.tld:443)

    If I do a wget --secure-protocol=sslv3 --no-check-certificate https://192.168.0.10:9002/ it works perfectly, but from Apache it is not working. I'm on an Ubuntu Server with the latest updates, running apache2 with mod_proxy and mod_ssl enabled:

        ~$ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=12.04
        DISTRIB_CODENAME=precise
        DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS"
        ~# dpkg -s apache2
        ... Version: 2.2.22-1ubuntu1.2 ...
        ~# dpkg -s openssl
        ... Version: 1.0.1-4ubuntu5.7 ...

    Hope that anyone may help.
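
    To pin down which protocol the embedded Tomcat will actually accept, it can help to probe it directly from the proxy host with stock OpenSSL (a sketch). If the forced-SSLv3 handshake below succeeds while the default negotiation fails the same way as Apache's log, that suggests the SSLProxyProtocol directive is being ignored or overridden somewhere else in the config:

        # force an SSLv3-only handshake, mirroring wget --secure-protocol=sslv3
        openssl s_client -connect 192.168.0.10:9002 -ssl3
        # compare against the default (SSLv23) negotiation
        openssl s_client -connect 192.168.0.10:9002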

  • In Linux, what's the best way to delegate administration responsibilities, like for Apache, a database, or some other application?

    - by Andrew Banks
    In Linux, what's the best way to delegate administration responsibilities for Apache and other "applications"? File permissions? Sudo? A mix of both? Something else? At work we have two tiers of "administrators":

    1) Operating system administrators. These are your run-of-the-mill "server administrators". They are responsible for just the operating system.

    2) Application administrators. The people who build the web site. This includes not only writing the SQL, PHP, and HTML, but also setting up and running Apache and PostgreSQL or MySQL. The aforementioned OS admins will install this stuff, but it's mainly up to the app admins to edit all the config files, start and stop processes when needed, and so on.

    I am one of the app admins. This is different from what I am used to: I used to just write code, and the sysadmin took care not only of the OS but also of installing, setting up, and keeping up the server software. But he left. Now I'm in charge of setting up Apache and the database. The new sysadmins say they just handle the operating system. That's no problem - I welcome learning new stuff. But there is a learning curve, even for the OS admins. Apache, by default, seems to be set up for administration by root directly: all the config files and scripts are 644 and owned by root:root. I'm not given the root password, naturally, so the OS admins must somehow give my ordinary OS user account all the rights necessary to edit Apache's config files, start and stop it, read its log files, and so on.

    Right now they're using a mix of (1) giving me certain sudo rights, (2) adding me to certain groups, and (3) changing the file permissions of various directories to make them writable by one of the groups I'm in. This never goes smoothly. There's always a back-and-forth between me and the sysadmins: they say it's ready, then I try certain things and half of them I still can't do, so they make some more changes, until finally I seem to be independent and can administer Apache and the database without pestering them anymore. It's the sheer complication and number of changes that make me uncomfortable. Even though it finally works, more or less, it seems hackneyed. I feel like we're doing it wrong. It seems like the makers of the software would have anticipated this scenario (someone other than root administering it) and would have a clean two- or three-step procedure for delegating responsibility to me. But it feels like we are really chewing up the filesystem and moving far away from the default setup. Any suggestions? Are we doing it the recommended way?

    P.S. For PostgreSQL it seems a little better: its files are owned by a system user named postgres, so giving me the right to run sudo su - postgres gives me just about everything. I'm just now getting into MySQL, but it seems to be set up similarly. It just seems a little weird doing all my work as another user.
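
    For the Apache half specifically, one common pattern - sketched here with hypothetical group and path names, to be adapted to the distribution's layout - is a narrow sudoers drop-in for service control plus group ownership of the config tree, instead of ad-hoc permission changes:

        # /etc/sudoers.d/appadmins  (hypothetical file; edit with visudo -f)
        # let the app-admin group control the Apache service, nothing else
        %appadmins ALL = (root) /usr/sbin/apachectl

        # config writable, logs readable, by the group (run once as root):
        #   chgrp -R appadmins /etc/apache2 && chmod -R g+rwX /etc/apache2
        #   chgrp -R appadmins /var/log/apache2 && chmod -R g+rX /var/log/apache2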

  • Hosed Windows 7 permissions

    - by Anthony
    Here is the most interesting thing I've noticed since the problems started: if I go into a Control Panel/system module (in this case the Resource Monitor) that has a "Check Online" type option, Firefox (my default browser) opens right up without a problem. But if I start Firefox from any shortcut (start menu, desktop, etc.), the Firefox process starts up (and the start menu icon starts glowing), only to end without notice a few seconds later.

    Possibly related: if I start up in Safe Mode (without networking; I haven't tried with it yet), I can start FF or Chrome just fine, but if I attempt to open Chrome normally, I get a permissions error. Opera and Safari seem to be okay (mostly); Safari crashes when I try to download any files. All of the above leads me to believe that some (but clearly not all) core files have messed-up permissions - or rather, that I no longer have permission. The system still does, based on Firefox opening without fail when the system initiates it. I've run MS Forefront once in normal mode and Malwarebytes twice in normal mode and once in safe mode. One trojan was found and deleted, but the problem persists.

    Two other things worth mentioning. First, I accidentally duplicated my library... I thought I'd try to add the "Internet" folder to my start menu, next to Music and Downloads. The first advanced thing I tried was "create new library". I clearly misunderstood what this means: I thought it was a way to add virtual folders to the library (which I thought, in turn, would allow me to choose it as a link on the start menu), but instead it recreated my already existing user folder, AppData and all. I didn't notice this until today.

    Second, I tried setting permissions for my user folder to full control, recursively... Confused but not giving up, I thought I could maybe create a shortcut to the NetHood folder manually, but instead got hit with an access-denied error. So I tried to change the permission levels for all sub-folders of my user folder so that I had full control. I got several access-denied errors along the way. At this point I gave up, went out, ended up caught in the rain and stuck on a friend's couch, and showed up late for work the next day. Thanks for nothing, Microsoft.

    When I finally got home today (20 hours later), I noticed that Firefox was acting really strange. I tried opening Chrome to see if the problem was client-side or server-side, and instead got the above-mentioned "you don't have permission to open this program" alert. And I think that's the whole story. Oh, I also did a system restore, choosing a point from this morning (an auto update); it worked, but the problem wasn't fixed. And then all the earlier restore points were gone.

    So the questions are: a) is there a way to set the admin and user privs back to "default"? b) would this, in anyone's expert opinion, fix the problems I'm having? c) how come being logged in as an admin isn't the same as being logged in with admin privs? It seems that half the time I have to use "run as administrator" for fairly standard things, because I'm being treated as me-the-user and not me-the-admin. Thanks for reading.
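
    On question (a), a sketch of the usual built-in route - hedged: run from an elevated prompt, expect it to take a while, and note that /reset reapplies what the parent folders propagate, which is close to, but not guaranteed identical to, factory defaults (the username below is from the post and may differ from the actual account name):

        :: reset ACLs recursively on the user profile (elevated cmd)
        icacls "C:\Users\Anthony" /reset /T /C
        :: re-grant the user full control over their own profile tree
        icacls "C:\Users\Anthony" /grant Anthony:(OI)(CI)F /T /C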

  • PHP, Apache and curl: Differences between Windows and Linux?

    - by beginner_
    I'm trying to run my PHP app on Ubuntu Server 11.10. This app works fine under Apache + PHP on Windows. I have other applications that I can simply copy & paste between the two OSes and they work on both (these don't use cURL). However, this one uses the PHP library Tonic (RESTful web services) and makes use of PHP's cURL module. The issue is that I'm not getting an error message, which makes it impossible to find the problem. I (must) use NTLM authentication, and this is done with the AuthenNTLM Apache module:

        Order allow,deny
        Allow from all
        PerlAuthenHandler Apache2::AuthenNTLM
        AuthType ntlm
        AuthName "Protected Access"
        require valid-user
        PerlAddVar ntdomain "domainName server"
        PerlSetVar defaultdomain domainName
        PerlSetVar ntlmsemtimeout 2
        PerlSetVar ntlmdebug 1
        PerlSetVar splitdomainprefix 0

    All files that cURL needs to fetch override AuthenNTLM authentication; since these files are only fetched by cURL from the same server, access can be limited to localhost:

        order deny,allow
        deny from all
        allow from 127.0.0.1
        Satisfy any

    Possible issues are: NTLM auth isn't overridden for files requested through cURL (even though AllowOverride All is set); curl works differently on Linux; or something else. The cURL code:

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_COOKIE, $strCookie);
        curl_setopt($ch, CURLOPT_URL, $baseUrl . $queryString);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $html = curl_exec($ch);
        curl_close($ch);

    The Apache log says:

        [error] Bad/Missing NTLM/Basic Authorization Header for /myApp/webservice/local/viewList.php

    But this directory should override NTLM authentication. Using curl from the Windows command line to access the same resource, I get:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html>
        <head>
        <title>406 Not Acceptable</title>
        </head>
        <body>
        <h1>Not Acceptable</h1>
        <p>An appropriate representation of the requested resource /myApp/webservice/myResource could not be found on this server.</p>
        Available variants:
        <ul>
        <li><a href="myResource.php">myResource.php</a> , type application/x-httpd-php</li>
        </ul>
        <hr>
        <address>Apache/2.2.20 (Ubuntu) Server at localhost Port 80</address>
        </body>
        </html>

    Note: this is a duplicate of http://stackoverflow.com/questions/9821979/php-curl-on-linux-what-is-the-difference-to-curl-on-windows, where it was suggested I post it here. EDIT: Please see "Ubuntu Server: Apache2 seems to attach .php to URI" - I discovered why it does not work, but need help so the issue does not occur anymore. ANSWER: The issue is the default Apache configuration on Ubuntu:

        Options Indexes FollowSymLinks MultiViews

    MultiViews changes the request URI from myResource to myResource.php. Solutions: disable MultiViews in .htaccess (Options -MultiViews); remove MultiViews from the default config; or rename the file, for example to myResourceClass. I chose the last option because it should work regardless of configuration, and I only have 3 such files, so the change took about 30 seconds...

  • Setting the source address to use a tun device does not work (Debian Squeeze)

    - by A. Donda
    There have been similar questions on StackExchange, but none of the answers helped me, so I'll try a question of my own. I have a VPN connection via OpenVPN. By default, all traffic is redirected through the tunnel using OpenVPN's "two more specific routes" trick, but I disabled that. My routing table is like this:

        Destination     Gateway         Genmask         Flags Metric Ref Use Iface
        198.144.156.141 192.168.2.1     255.255.255.255 UGH   0      0   0   eth0
        10.30.92.5      0.0.0.0         255.255.255.255 UH    0      0   0   tun1
        10.30.92.1      10.30.92.5      255.255.255.255 UGH   0      0   0   tun1
        192.168.2.0     0.0.0.0         255.255.255.0   U     0      0   0   eth0
        0.0.0.0         10.30.92.5      0.0.0.0         UG    0      0   0   tun1
        0.0.0.0         192.168.2.1     0.0.0.0         UG    0      0   0   eth0

    And the interface configuration is like this:

        # ifconfig
        eth0   Link encap:Ethernet  HWaddr XX-XX-
               inet addr:192.168.2.100  Bcast:192.168.2.255  Mask:255.255.255.0
               inet6 addr: fe80::211:9ff:fe8d:acbd/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:394869 errors:0 dropped:0 overruns:0 frame:0
               TX packets:293489 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:388519578 (370.5 MiB)  TX bytes:148817487 (141.9 MiB)
               Interrupt:20 Base address:0x6f00

        tun1   Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
               inet addr:10.30.92.6  P-t-P:10.30.92.5  Mask:255.255.255.255
               UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
               RX packets:64 errors:0 dropped:0 overruns:0 frame:0
               TX packets:67 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:100
               RX bytes:9885 (9.6 KiB)  TX bytes:4380 (4.2 KiB)

    ...plus the lo device. The routing table has two default routes: one via eth0 through my local network router (DSL modem) at 192.168.2.1, and another via tun1 through the VPN's gateway. With this configuration, if I connect to a site, the direct route is chosen (because it has fewer hops?):

        # traceroute 8.8.8.8 -n
        traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
         1  192.168.2.1    0.427 ms   0.491 ms   0.610 ms
         2  213.191.89.13  17.981 ms  20.137 ms  22.141 ms
         3  62.109.108.48  23.681 ms  25.009 ms  26.401 ms
        ...

    This is fine, because my goal is to send only traffic from specific applications through the tunnel (esp. transmission, using its -i / bind-address-ipv4 option). To test whether this can work at all, I check it first with traceroute's -s option:

        # traceroute 8.8.8.8 -n -s 10.30.92.6
        traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
         1  * * *
         2  * * *
         3  * * *
        ...

    This I take to mean that connecting with the tunnel's local address as source is not possible. What is possible (though only as root) is to specify the source interface:

        # traceroute 8.8.8.8 -n -i tun1
        traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
         1  10.30.92.1      129.337 ms  297.758 ms  297.725 ms
         2  * * *
         3  198.144.152.17  297.653 ms  297.652 ms  297.650 ms
        ...

    So apparently the tun1 interface is working and it is possible to send packets through it. But selecting the source interface is not implemented in my actual target application (transmission), so I would like to get source address selection to work. What am I doing wrong?
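
    What usually makes source-address selection work in a two-default-routes setup like this is policy routing: a rule that sends packets sourced *from* the tunnel address to a table whose default route points into the tunnel. A sketch with iproute2 (the table number is an arbitrary choice, not anything OpenVPN sets up for you):

        # route packets sourced from the tunnel IP via a dedicated table
        ip route add default via 10.30.92.5 dev tun1 table 100
        ip rule add from 10.30.92.6/32 table 100
        ip route flush cache
        # the earlier test should then answer:
        traceroute 8.8.8.8 -n -s 10.30.92.6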

  • C# WebBrowser.ShowPrintDialog() not showing

    - by jeah_wicer
    I have a peculiar problem when trying to print an HTML report. The file itself is a normal local HTML file, located on my hard drive. To print it, I have tried the following:

        public static void PrintReport(string path)
        {
            WebBrowser wb = new WebBrowser();
            wb.Navigate(path);
            wb.ShowPrintDialog();
        }

    And I have a form with a button with the click event:

        private void button1_Click(object sender, EventArgs e)
        {
            string path = @"D:\MyReport.html";
            PrintReport(path);
        }

    This does absolutely nothing. Which is kind of strange... but things get stranger when I edit the print function to the following:

        public static void PrintReport(string path)
        {
            WebBrowser wb = new WebBrowser();
            wb.Navigate(path);
            MessageBox.Show("TEST");
            wb.ShowPrintDialog();
        }

    It works. Yes, just by adding a MessageBox. The MessageBox shows, and after it comes the print dialog. I have also tried Thread.Sleep(1000) instead, which doesn't work. Can anyone explain to me what's going on here? Why would a MessageBox make any difference? Could it be some kind of permission problem? I've reproduced this on both Windows 7 and 8 - same thing. I made this small application with only the above code to isolate the problem. I am quite sure it works on Windows XP, though, since an older version of the application I'm working on runs on it. When trying to do this directly with the mshtml DLL instead, I also get problems. Any input or clarification is greatly appreciated!
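
    For what it's worth, Navigate() is asynchronous, and a modal MessageBox pumps Windows messages, which is plausibly what lets the document finish loading before the dialog is shown. A sketch that waits for the load explicitly instead, using the same WinForms API:

        public static void PrintReport(string path)
        {
            WebBrowser wb = new WebBrowser();
            // show the dialog only once the document has actually loaded
            wb.DocumentCompleted += (s, e) =>
            {
                wb.ShowPrintDialog();
                wb.Dispose();
            };
            wb.Navigate(path);
        }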

  • How to retrieve value from DropDownListFor html helper in ASP.NET MVC2?

    - by Eedoh
    Hello. I know there were a few similar questions here about DropDownListFor, but none of them helped me... I use Entity Framework as the ORM in my project. There's an EF model called "Stete". Stete has a foreign key to an EF model called "Drustva". Now I'm trying to make a form for editing the data for the Stete model. I managed to display everything, including the Stete.Drustva.Naziv property, but I can't get this last property in my [HttpPost] handler method: it always returns 0, no matter what I select in the drop-down list. Here's the code.

    DrustvaController:

        public static IEnumerable<SelectListItem> DrustvaToSelectListItemsById(this KnjigaStetnikaEntities pEntities, int Id)
        {
            IEnumerable<Drustva> drustva = (from d in pEntities.Drustva select d).ToList();
            return drustva.OrderBy(drustvo => drustvo.Naziv).Select(drustvo => new SelectListItem
            {
                Text = drustvo.Naziv,
                Value = drustvo.Id.ToString(),
                Selected = (drustvo.Id == Id) ? true : false
            });
        }

    SteteController:

        private IEnumerable<SelectListItem> privremenaListaDrustava(int Id)
        {
            using (var ctx = new KnjigaStetnikaEntities())
            {
                return ctx.DrustvaToSelectListItemsById(Id);
            }
        }

        public ActionResult IzmijeniPodatkeStete(Int32 pBrojStete)
        {
            PretragaStetaModel psm = new PretragaStetaModel();
            ViewData["drustva"] = privremenaListaDrustava(psm.VratiStetuPoBrojuStete(pBrojStete).Drustva.Id);
            ViewData.Model = new Models.Stete();
            return View("EditView", (Stete.Models.Stete)psm.GetSteta(pBrojStete));
        }

    EditView:

        <div class="editor-label">
            <%: Html.Label("Društvo") %>
        </div>
        <div class="editor-field">
            <%: Html.DropDownListFor(m => m.Drustva.Naziv, ViewData["drustva"] as IEnumerable<SelectListItem>) %>
            <%: Html.ValidationMessageFor(model => model.FKDrustvo) %>
        </div>

    I am sorry for not translating the names of the objects into English, but they hardly have appropriate translations. If necessary, I can try creating a similar example...
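
    A hedged observation on the binding: the list's Value holds Drustva.Id, yet the helper binds to m.Drustva.Naziv, so on post the selected id has nowhere meaningful to go - and the default model binder cannot populate a navigation property from a scalar anyway. Since the view already validates model.FKDrustvo, binding the dropdown to that FK property would be the natural sketch:

        <%: Html.DropDownListFor(m => m.FKDrustvo, ViewData["drustva"] as IEnumerable<SelectListItem>) %>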

  • Working with Subversion the same as with Visual Source Safe in Visual Studio

    - by Para
    Hi. At work I just started using Subversion with AnkhSVN instead of Visual SourceSafe. I managed to integrate it well enough, but it doesn't feel the same. Using VSS, the following would happen: a user checks out a file by right-clicking and selecting "Check Out", or by editing it. If another user tries to modify the same file, he gets an error. No two users can edit the same file at the same time. No fancy merging. No conflicts and no conflict resolutions. I understand that the philosophy behind Subversion is different, but is there any way the behavior described above could be duplicated with Subversion? There is an option in AnkhSVN called "Automatically lock files on change...", but even if I activate this option, when I edit a file it never gets automatically locked. And even if this option worked, the other users wouldn't see the lock until they committed the file; they wouldn't get an error when they tried to edit it, like they would in Visual SourceSafe. So basically: can Visual SourceSafe's behavior be duplicated using Subversion and AnkhSVN?
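
    The closest Subversion-native approximation, as far as I know, is the svn:needs-lock property: files carrying it check out read-only, and a user must take a lock before editing, which clients (AnkhSVN included) surface up front rather than at commit time. A sketch from the command line (the file name is just an example):

        # mark a file so working copies receive it read-only until locked
        svn propset svn:needs-lock '*' MyFile.cs
        svn commit -m "require locking for MyFile.cs"
        # a user who wants to edit takes the lock (released again on commit)
        svn lock MyFile.cs -m "editing"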

  • Android ListView item highlight programmatically

    - by dhomes
    Hi all, I know this has been asked around, but I haven't found quite the answer yet (new to Android, so sorry about the ignorance). My app's launch Activity has an EditText (called searchbox), a ListView (designations) and a Spinner (types). I use the EditText as a search box; I have to pass the string through custom editing to make searching more flexible. After that, I match it against the closest approximation I find in designations and do designations.setSelection(j);. As expected, this sets the desired item to the top of designations, but I can't find a way to highlight it via code. Now, I do know that if the device is in touch mode, the highlighting of a selected item won't occur. So the last 4 lines of my searchbox's onTextChanged event are:

        designations.setFocusable(true);
        designations.setFocusableInTouchMode(true);
        if (match == true) designations.setSelection(j);
        if (st.length() == 0) designations.setSelection(0);

    ...to no avail. I don't have any code in searchbox's afterTextChanged(Editable s), so could anyone give me a clue on this? Regards, ~dh
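
    One approach that survives touch mode - a sketch; the selector drawable name is hypothetical - is to stop relying on focus-based selection and instead mark the row as checked, letting a state-aware background do the highlighting:

        // in onCreate: single-choice mode so one row can carry the checked state
        designations.setChoiceMode(ListView.CHOICE_MODE_SINGLE);

        // after finding the match at position j:
        designations.setItemChecked(j, true);   // highlights via the checked state
        designations.setSelection(j);           // still scrolls it to the top

    The row layout then needs a background (or a CheckedTextView) that reacts to state_checked/state_activated, e.g. android:background="@drawable/row_highlight".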

  • Add a row to UITableView for adding new item?

    - by David.Chu.ca
    In order to provide UI for the user to add new items to my table view, I would like to add a new row to the table at a specified location (the last row, for example) when the view is in edit mode (I have an Edit button on the right side of the view's navigation bar). This new row should have the add (plus) indicator on the left side and a disclosure accessory arrow on the right. When the view is not in edit mode, this add row should not be displayed. I am not sure if I should override:

        - (void)setEditing:(BOOL)editing animated:(BOOL)animated { ... }

    ...and there call the UITableView method insertRowsAtIndexPaths:withRowAnimation: to insert a new row? My understanding is that this call may add a new row into the table view. The table view's data source is from Core Data storage; wouldn't this cause an inconsistency between the number of items in the data store and in the table view? If it is OK and I have to manage rows in the table view, how can I add the add indicator on the left and the disclosure arrow on the right of the new row? Another question: if I can insert a new row as an add row, should I remove it when the table view is not in edit mode? I just want to know if I am on the right track.
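
    A sketch of the usual pattern, under the assumption that numberOfRowsInSection: reports one extra row while editing (self.items is a hypothetical array backing the table) - that assumption is what keeps the data source and the table consistent:

        - (void)setEditing:(BOOL)editing animated:(BOOL)animated {
            [super setEditing:editing animated:animated];
            NSIndexPath *addRow = [NSIndexPath indexPathForRow:[self.items count] inSection:0];
            NSArray *paths = [NSArray arrayWithObject:addRow];
            if (editing) {
                [self.tableView insertRowsAtIndexPaths:paths
                                      withRowAnimation:UITableViewRowAnimationFade];
            } else {
                [self.tableView deleteRowsAtIndexPaths:paths
                                      withRowAnimation:UITableViewRowAnimationFade];
            }
        }

        // the green plus comes from the editing style, not from the cell itself
        - (UITableViewCellEditingStyle)tableView:(UITableView *)tableView
                   editingStyleForRowAtIndexPath:(NSIndexPath *)indexPath {
            return (indexPath.row == [self.items count]) ? UITableViewCellEditingStyleInsert
                                                         : UITableViewCellEditingStyleDelete;
        }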

  • TFS Build errors TF224003, TF215085, TF215076

    - by iamdudley
    Hi, I am using TFS 2008 and VS 2008. I run nightly builds for about 20 applications using one build agent, and the builds are scheduled for either 1am or 2am. Most of the builds succeed; however, 6 of them fail regularly with similar errors. The errors are either the first two below, or the third one by itself:

        TF215085: An error occurred while connecting to agent \xxxx\BUILDMACHINE: TF215076:
        Team Foundation Build on computer BUILDMACHINE (port 9191) is not responding.
        (Detail Message: The request was aborted: The operation has timed out.) 11/04/2010 2:10:10 AM

        TF224003: An exception occurred on the build computer BUILDMACHINE: The build
        (vstfs:///Build/Build/2632) has already completed and cannot be started again.

        TF215085: An error occurred while connecting to agent \yyyyy\BA_WKSTFSBUILD: Team
        Foundation services are not available from server srvtfs. Technical information
        (for administrator): The operation has timed out

    It looks to me like some kind of communication error - maybe the port gets overloaded; can this happen? Should I spread the builds out a bit more? In the build definition it says "Queue the build on the default build agent at", so I figured that if I scheduled them to start at the same time they would be queued and would run sequentially. Most of the suggestions I've found online for these errors are for all-or-nothing scenarios where no builds work at all, whereas in my case most build but some consistently do not. Judging by the dates of the last successful builds of these 6 failing builds, I believe it is the same 6 failing every night. (I'm editing the build definitions now to keep the failed builds so I can get some more info on the problem.) Any help on this would be much appreciated. James.

  • Invalid byte 1 of 1-byte UTF-8 sequence

    - by user275886
    I have a MyFaces Facelets application where the page coding is a bit rugged. Anyway, it's developed with Eclipse and built with Ant, and kind of runs OK in Tomcat 2.0.26. So far so good. Now, I'd rather build with Maven, so I made a couple of POM files, opened them in NetBeans and built, and now I have a WAR file that deploys OK. However, on any Facelets page it barfs out with:

        com.sun.org.apache.xerces.internal.impl.io.MalformedByteSequenceException: Invalid byte 1 of 1-byte UTF-8 sequence.
            at com.sun.org.apache.xerces.internal.impl.io.UTF8Reader.invalidByte(UTF8Reader.java:684)
            at com.sun.org.apache.xerces.internal.impl.io.UTF8Reader.read(UTF8Reader.java:554)
            at com.sun.org.apache.xerces.internal.impl.XMLEntityScanner.load(XMLEntityScanner.java:1742)

    So, I've tried a lot of different things, and the application actually runs simple pages without the Facelets stuff. But everything runs if I just build with Ant instead... So my question is: what's the most likely difference between an Ant build and a Maven build that may cause this? It also seems that even though I've configured for UTF-8 in NetBeans and the POM files, NetBeans eventually ends up reporting the Facelets files as ISO-8859-1 after some editing. I've made sure that the most central libs are of the same version (especially Xerces 2.3.0), and I've added an encoding servlet filter that had no effect. And I'd rather fix the Maven build and keep the buggy pages than the other way around... it's my intention to introduce Maven, not fix buggy pages.
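
    One difference worth checking first: unless told otherwise, several Maven plugins fall back to the platform default encoding, while the Ant build may have had its encoding set explicitly. A sketch of the conventional POM properties that the resources and compiler plugins honour:

        <properties>
            <!-- picked up by maven-resources-plugin, maven-compiler-plugin, etc. -->
            <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
            <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        </properties>

    And if the Facelets files really are ISO-8859-1 on disk, as NetBeans suggests, either converting them or declaring that encoding matters: copying or filtering them as UTF-8 when they aren't is a classic way to produce exactly the invalid-byte error Xerces is throwing.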

  • POST from edit/create partial views loaded into Twitter Bootstrap modal

    - by mare
    I'm struggling with AJAX POSTs from a form that is loaded into a Twitter Bootstrap modal dialog. The partial-view form goes like this:

        @using (Html.BeginForm())
        {
            // fields
            // ...
            // submit
            <input type="submit" value="@ButtonsRes.button_save" />
        }

    Now this is being used in non-AJAX editing with classic postbacks. Is it possible to use the same partial for AJAX functionality? Or should I abstract away the inputs into their own partial view? Like this:

        @using (Ajax.BeginForm())
        {
            @Html.Partial("~/Views/Shared/ImageEditInputs.cshtml")
            // but what to do with this one then?
            <input type="submit" value="@ButtonsRes.button_save" />
        }

    I know how to load this into the Bootstrap modal, but a few changes should happen on the fly: the buttons in the Bootstrap modal should be placed in a special container (the modal footer); the AJAX POST should be done when clicking Save, which would first validate the form and keep the modal open if it's not valid (displaying the errors, of course), then post and close the modal if everything went fine; and the view that opened the modal should display some feedback at the top, saying the save was successful. I'm mostly struggling with where to put what JS code. So far I have this within the List view, which wires up the modals:

        $(document).ready(function () {
            $('.openModalDialog').click(function (event) {
                event.preventDefault();
                var url = $(this).attr('href');
                $.get(url, function (data) {
                    $('#modalContent').html(data);
                    $('#modal').modal('show');
                });
            });
        });

    The above code, however, doesn't take into account the special Bootstrap modal content placeholders (header, content, footer). Is it possible to achieve what I want without having multiple partial views with the same inputs, and without having to do hacks like moving the Submit button around?
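
    A sketch of the missing half: a delegated submit handler that posts the modal's form over AJAX, re-renders the partial on validation failure, and closes on success. It assumes a particular server contract - the controller returns the partial again (with errors) when model state is invalid, and an empty 200 when the save succeeds - and the #saveFeedback element is hypothetical:

        $(document).on('submit', '#modalContent form', function (event) {
            event.preventDefault();
            var $form = $(this);
            $.post($form.attr('action'), $form.serialize(), function (data) {
                if (data) {
                    // validation failed: server sent the partial back with errors
                    $('#modalContent').html(data);
                } else {
                    $('#modal').modal('hide');
                    $('#saveFeedback').text('Saved successfully').show();
                }
            });
        });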

  • What's the best way to refresh a UITableView within a UINavigationController hierarchy

    - by Steve Neal
    Hi, I'm pretty new to iPhone development and have struggled to find what I consider a neat way around this problem. I have a user interface where a summary of record data is displayed in a table inside a navigation controller. When the user taps the accessory button for a row, a new view is pushed onto the navigation controller, revealing a view where the user can edit the data in the corresponding record. Once done, the editing view is popped from the navigation controller's stack and the user is returned to the table view. My problem is that when the user returns to the table view, the table still shows the state of the data from before the record was edited, so I must reload the table data to show the changes. It doesn't seem possible to reload the table data before it is displayed, as the call only updates displayed records; reloading it after the table has been displayed results in the old data changing before the user's eyes, which I'm not too happy with. This seems to me like a pretty normal thing to want to do in an iPhone app. Can anyone please suggest the best-practice approach to doing this? I feel like I'm missing something. Cheers - Steve.
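
    The conventional spot for this - a minimal sketch of the standard UIKit lifecycle hook - is viewWillAppear:, which runs after the pop but before the table is back on screen, so the user never sees the stale rows:

        - (void)viewWillAppear:(BOOL)animated {
            [super viewWillAppear:animated];
            [self.tableView reloadData];   // runs before the view is visible again
        }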

  • EDM -> POCO -> WCF (.NET4) But transferring Collections causes IsReadOnly set to TRUE

    - by Gary B
    OK, this may sound a little 'unorthodox', but... using VS2010 and the new POCO T4 template for Entity Framework (http://tinyurl.com/y8wnkt2), I can generate nice POCOs. I can then use these POCOs (as DTOs) in a WCF service, essentially going from the EDM all the way through to the client. Kind of what this guy is doing (http://tinyurl.com/yb4bslv), except everything is generated automatically. I understand that an entity and a DTO 'should' be different, but in this case I'm handling both client and server, and there are some real advantages to having the DTO in the model and automatically generated. My problem is that when I transfer an entity that has a relationship, the client-generated collection (ICollection) has the read-only value set, so I can't manipulate that relationship. For example, after retrieving an existing Order, I can't add a product to its Products collection client-side... the Products collection is read-only. I would prefer to do a bunch of client-side 'order editing' and then send the updated order back, rather than making dozens of server round trips (e.g. AddProductToOrder(product)). I'd also prefer not to have a bunch of thunking between entity and DTO. So all in all this looks good to me... except for the read-only part. Is there a solution, or is this too much against the SOA grain?
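
    A hedged hunch about the cause: by default, "Add Service Reference" materializes ICollection<T> members as T[] on the client, and arrays report IsReadOnly = true, so Add() fails. If that is what's happening here, telling the proxy generator to use List<T> is the standard fix - in Visual Studio via "Configure Service Reference..." -> Collection type, or with svcutil (the service URL below is hypothetical):

        rem ask svcutil to deserialize collections as List<T> instead of arrays
        svcutil http://localhost/MyService.svc?wsdl /collectionType:System.Collections.Generic.List`1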

  • Silverlight: Problem using CellEditingTemplate

    - by Charles
    Hello Silverlight gurus, I am experiencing a problem with the DataGrid where my data-bound object's properties are not being updated when using CellTemplate/CellEditingTemplate:

        <data:DataGridTemplateColumn Header="Text">
            <data:DataGridTemplateColumn.CellTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding Text}"></TextBlock>
                </DataTemplate>
            </data:DataGridTemplateColumn.CellTemplate>
            <data:DataGridTemplateColumn.CellEditingTemplate>
                <DataTemplate>
                    <TextBox Text="{Binding Text, Mode=TwoWay}" />
                </DataTemplate>
            </data:DataGridTemplateColumn.CellEditingTemplate>
        </data:DataGridTemplateColumn>

    I am binding to a code-generated entity via RIA Services. I've added an event handler to the PropertyChanged event, and it is never fired. However, if I do not use a template and instead use a DataGridTextColumn, everything works fine. I'm sure this sounds like an easy fix - I'm only using a TextBox in my editing template, so why not use a DataGridTextColumn? The problem is that I want a multi-line textbox, so using a DataGridTextColumn is not an option. Any suggestions? Do you know of any differences between using a CellEditingTemplate containing a single TextBox and using a DataGridTextColumn? Thanks, -Charles

    [UPDATE] I posted a bug report here: http://silverlight.net/forums/p/118729/267521.aspx - I can't imagine that this is "as designed"... If someone else has known about this and I'm just being dumb, I'd appreciate an explanation; I'd prefer embarrassment over ignorance :).

  • I have made an installer for MyProgram but the uninstall shortcut that it creates leaves behind empty folders

    - by Coder7862396
    I have created an installer for MyProgram using the Visual Studio Installer (Visual Studio Setup Project). It is called "MyProgram Setup.msi". It installs the program fine, and if it is uninstalled using the Add/Remove Programs control panel, everything gets removed as it should. The problem is that I want to add a shortcut to the "User's Programs Menu", under the program shortcut, called "Uninstall MyProgram". I have tried doing this in 3 different ways, and in all 3, if MyProgram is uninstalled using that shortcut, the uninstall leaves behind 2 empty folders ("...Program Files\MyCompany\" and "...Program Files\MyCompany\MyProgram\"). Here are the 3 ways that I have tried to make an uninstaller shortcut.

    1) A shortcut to a batch or script file.

    Uninstall MyProgram.bat:

        @ECHO OFF
        msiexec /uninstall {MyGUID}

    Uninstall MyProgram.vbs:

        Dim objShell
        Set objShell = WScript.CreateObject("WScript.Shell")
        objShell.Run("START /B msiexec /uninstall {MyGUID}")
        Set objShell = Nothing

    2) Editing the MSI file using Orca.exe. I found out how to do this using this guide: http://www.codeproject.com/KB/install/MSIShortcuts.aspx - I added the shortcut entry to the Shortcut table. Uninstalling worked, but it still left behind the 2 empty folders when using this shortcut.

    3) From code within MyProgram.exe. I modified MyProgram.exe to take a "/uninstall" command-line parameter, run "msiexec.exe /uninstall {MyGUID}" and exit itself, similar to this solution: http://www.codeproject.com/KB/install/DeployUninstall.aspx

    None of these attempts created a shortcut which can uninstall the program as well as the program's base folders. I don't want to switch to some other installer product such as Inno Setup, NSIS, or WiX.
