Search Results

Search found 8613 results on 345 pages for 'ssl keys'.


  • Why are my httpd mpm_prefork processes being reaped so quickly?

    - by Dan Pritts
    We've got a system running RHEL6, x64. We are using a local installation of apache 2.2.22 from source. we serve primarily: mod_perl applications (with a local installation of perl 5.16.0) tomcat applications proxied with mod_jk Here is some context; the main question is below. All of this talks to an Oracle backend. We are having issues with Oracle becoming unresponsive. We think this is because we're hitting the maximum process limit in oracle. We've upped the process limit, but now we are hitting memory pressure on the oracle server. We have tons of oracle sessions sitting idle. I can trace a bunch of them back to the httpd processes. We have mod_perl's Apache::DBI start up a new connection to the database with each httpd child that's spawned. We are concerned that these are not always getting closed out properly when the httpd's exit...and the httpd's are exiting very frequently. I know that it would be good to modify the mod_perl applications to use some better form of db connection pooling; we plan to pursue that but would like to solve our immediate problem sooner. So here's the main question. We are using the prefork MPM. The apache child processes are lasting at most a few minutes. Log analysis shows that each one is serving fewer than 50 clients before exiting; the last request each child serves is OPTIONS * HTTP/1.0 on some sort of internal connection; I'm under the impression that this is a "ping" from the master process. I've adjusted the MPM config as follows. I didn't want to raise MinSpareServers too high, because, after all, i'm trying to minimize the number of sessions to oracle. MinSpareServers 5 MaxSpareServers 30 MaxClients 150 MaxRequestsPerChild 10000 Right now we're serving 250-300 requests per minute. We've got 21 httpd's running, the eldest (other than the master, owned by root) being 3 minutes old. This rate of reaping of the apache children really seems excessive. What could be causing it? Apache was built with: $ ./configure --prefix=/opt/apache --with-ssl=/usr/lib --enable-expires --enable-ext-filter --enable-info --enable-mime-magic --enable-rewrite --enable-so --enable-speling --enable-ssl --enable-usertrack --enable-proxy --enable-headers --enable-log-forensic Apache config info: % /opt/apache/bin/httpd -V Server version: Apache/2.2.22 (Unix) Server built: Jul 23 2012 22:30:13 Server's Module Magic Number: 20051115:30 Server loaded: APR 1.4.5, APR-Util 1.4.1 Compiled using: APR 1.4.5, APR-Util 1.4.1 Architecture: 64-bit Server MPM: Prefork threaded: no forked: yes (variable process count) Server compiled with.... 
-D APACHE_MPM_DIR="server/mpm/prefork" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=128 -D HTTPD_ROOT="/opt/apache" -D SUEXEC_BIN="/opt/apache/bin/suexec" -D DEFAULT_PIDLOG="logs/httpd.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_LOCKFILE="logs/accept.lock" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="conf/mime.types" -D SERVER_CONFIG_FILE="conf/httpd.conf" modules are compiled into apache rather than shared libs: % /opt/apache/bin/httpd -l Compiled in modules: core.c mod_authn_file.c mod_authn_default.c mod_authz_host.c mod_authz_groupfile.c mod_authz_user.c mod_authz_default.c mod_auth_basic.c mod_ext_filter.c mod_include.c mod_filter.c mod_log_config.c mod_log_forensic.c mod_env.c mod_mime_magic.c mod_expires.c mod_headers.c mod_usertrack.c mod_setenvif.c mod_version.c mod_proxy.c mod_proxy_connect.c mod_proxy_ftp.c mod_proxy_http.c mod_proxy_scgi.c mod_proxy_ajp.c mod_proxy_balancer.c mod_ssl.c prefork.c http_core.c mod_mime.c mod_status.c mod_autoindex.c mod_asis.c mod_info.c mod_cgi.c mod_negotiation.c mod_dir.c mod_actions.c mod_speling.c mod_userdir.c mod_alias.c mod_rewrite.c mod_so.c One final note - the red hat httpd, apr, and perl packages are all installed, but ldd shows that none of those libraries are linked with the running httpd.
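One hedged way to narrow this down before touching the mod_perl code: child exits whose last request is "OPTIONS * HTTP/1.0" are consistent with Apache's internal dummy connections, which the parent uses to wake a child it intends to wind down - that happens both when idle children exceed MaxSpareServers and on every graceful restart (for example from log rotation or a deploy script). A small diagnostic sketch using the compiled-in mod_status; the /server-status ACL and paths are illustrative, not a recommendation for this workload:

  # httpd.conf sketch -- expose the scoreboard, restricted to localhost
  ExtendedStatus On
  <Location /server-status>
      SetHandler server-status
      Order deny,allow
      Deny from all
      Allow from 127.0.0.1
  </Location>

  # watch child ages/states, and check whether anything is signalling graceful restarts
  watch -n 5 'ps -o pid,etime,stat,cmd -C httpd'
  grep -i 'graceful\|SIGUSR1' /opt/apache/logs/error_log | tail
  grep -rl 'apachectl\|httpd' /etc/logrotate.d/ /etc/cron*

If the error log shows graceful restarts every few minutes, that alone explains the short-lived children; if not, the scoreboard should show whether the parent is reaping idle workers or the children are exiting on their own.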

    Read the article

  • How Hacker Can Access VPS CentOS 6 content?

    - by user2118559
    Just want to understand. Please, correct mistakes and write advices Hacker can access to VPS: 1. Through (using) console terminal, for example, using PuTTY. To access, hacker need to know port number, username and password. Port number hacker can know scanning open ports and try to login. The only way to login as I understand need to know username and password. To block (make more difficult) port scanning, need to use iptables configure /etc/sysconfig/iptables. I followed this https://www.digitalocean.com/community/articles/how-to-setup-a-basic-ip-tables-configuration-on-centos-6 tutorial and got *nat :PREROUTING ACCEPT [87:4524] :POSTROUTING ACCEPT [77:4713] :OUTPUT ACCEPT [77:4713] COMMIT *mangle :PREROUTING ACCEPT [2358:200388] :INPUT ACCEPT [2358:200388] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [2638:477779] :POSTROUTING ACCEPT [2638:477779] COMMIT *filter :INPUT DROP [1:40] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [339:56132] -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP -A INPUT -p tcp -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j DROP -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,SYN,RST,PSH,ACK,URG -j DROP -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT -A INPUT -p tcp -m tcp --dport 110 -j ACCEPT -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT -s 11.111.11.111/32 -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT -p tcp -m tcp --dport 21 -j ACCEPT -A INPUT -s 11.111.11.111/32 -p tcp -m tcp --dport 21 -j ACCEPT COMMIT Regarding ports that need to be opened. If does not use ssl, then seems must leave open port 80 for website. Then for ssh (default 22) and for ftp (default 21). And set ip address, from which can connect. So if hacker uses other ip address, he can not access even knowing username and password? Regarding emails not sure. If I send email, using Gmail (Send mail as: (Use Gmail to send from your other email addresses)), then port 25 not necessary. For incoming emails at dynadot.com I use Email Forwarding. Does it mean that emails “does not arrive to VPS” (before arriving to VPS, emails are forwarded, for example to Gmail)? If emails does not arrive to VPS, then seems port 110 also not necessary. If use only ssl, must open port 443 and close port 80. Do not understand regarding port 3306 In PuTTY with /bin/netstat -lnp see Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 992/mysqld As understand it is for mysql. But does not remember that I have opened such port (may be when installed mysql, the port is opened automatically?). Mysql is installed on the same server, where all other content. Need to understand regarding port 3306 2. Also hacker may be able access console terminal through VPS hosting provider Control Panel (serial console emergency access). As understand only using console terminal (PuTTY, etc.) can make “global” changes (changes that can not modify with ftp). 3. Hacker can access to my VPS exploiting some hole in my php code and uploading, for example, Trojan. Unfortunately, faced situation that VPS was hacked. As understand it was because I used ZPanel. On VPS ( \etc\zpanel\panel\bin) ) found one php file, that was identified as Trojan by some virus scanners (at virustotal.com). Experimented with the file on local computer (wamp). And appears that hacker can see all content of VPS, rename, delete, upload etc. 
In my opinion, if I run a command like chattr +i /etc/php.ini from PuTTY, then a hacker would not be able to modify php.ini. Is there any other way to get into the VPS?
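On the two concrete points above - pinning SSH to one source address and the unexpected listener on 3306 - here is a minimal sketch, assuming MySQL only ever needs to be reached by the web application on the same box. The 11.111.11.111 address is the same placeholder used in the rules above. Note that in the posted rule set the generic "--dport 22 -j ACCEPT" line comes first and already admits every address, so the address-restricted line after it has no effect; to actually limit SSH to one IP, the generic rule has to be removed.

  # /etc/my.cnf -- keep mysqld off the public interface
  [mysqld]
  bind-address = 127.0.0.1

  # iptables sketch (shown in isolation): SSH only from the trusted address,
  # port 3306 only via loopback, then persist the rules
  iptables -A INPUT -s 11.111.11.111/32 -p tcp --dport 22 -j ACCEPT
  iptables -A INPUT -p tcp --dport 22 -j DROP
  iptables -A INPUT ! -i lo -p tcp --dport 3306 -j DROP
  service iptables save

Also worth noting: with the INPUT chain already defaulting to DROP and no ACCEPT rule for 3306, that port is not actually reachable from outside even though mysqld listens on 0.0.0.0; binding it to 127.0.0.1 just removes the ambiguity.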

    Read the article

  • nginx server over https using up all available file handles (upd: infinite loop?)

    - by mmr
    Hi all, So I have an nginx server that's working over https with Sinatra. When I try to download a jnlp file in a configuration that works fine over Mongrel and http (no s), the nginx server fails to serve the file with a 504 error. Subsequent checking of the logs states that this error is due to overflowing the available number of file handles, ie, "24: too many open files". Running sudo lsof -p <nginx worker pid> gets me a huge list of files, all looking like: nginx 1771 nobody 11u IPv4 10867997 0t0 TCP localhost:44704->localhost:https (ESTABLISHED) nginx 1771 nobody 12u IPv4 10868113 0t0 TCP localhost:https->localhost:44704 (ESTABLISHED) nginx 1771 nobody 13u IPv4 10868114 0t0 TCP localhost:44705->localhost:https (ESTABLISHED) nginx 1771 nobody 14u IPv4 10868191 0t0 TCP localhost:https->localhost:44705 (ESTABLISHED) nginx 1771 nobody 15u IPv4 10868192 0t0 TCP localhost:44706->localhost:https (ESTABLISHED) nginx 1771 nobody 16u IPv4 10868255 0t0 TCP localhost:https->localhost:44706 (ESTABLISHED) nginx 1771 nobody 17u IPv4 10868256 0t0 TCP localhost:44707->localhost:https (ESTABLISHED) nginx 1771 nobody 18u IPv4 10868330 0t0 TCP localhost:https->localhost:44707 (ESTABLISHED) nginx 1771 nobody 19u IPv4 10868331 0t0 TCP localhost:44708->localhost:https (ESTABLISHED) nginx 1771 nobody 20u IPv4 10868434 0t0 TCP localhost:https->localhost:44708 (ESTABLISHED) Increasing the number of files that can be opened is no help, because then nginx just blows right past that limit. And no wonder, it looks like it's in some kind of loop to pull all available files. Any idea what's going on, and how to fix it? EDIT: nginx 0.7.63, ubuntu linux, sinatra 1.0 EDIT 2: Here's the offending code. It's sinatra serving jnlp, which I finally figured out: get '/uploader' do #read in the launch.jnlp file theJNLP = "" File.open("/launch.jnlp", "r+") do |file| while theTemp = file.gets theJNLP = theJNLP + theTemp end end content_type :jnlp theJNLP end If I serve this with Sinatra via Mongrel and http, everything works fine. If I serve this with Sinatra and nginx via https, I get the above error. All other parts of the website appear to be equivalent. EDIT: I have since upgraded to passenger 2.2.14, ruby 1.9.1, nginx 0.8.40, openssl 1.0.0a, and no change. EDIT: The culprit appears to be infinite redirects due to using SSL. I don't know how to fix this, other than hosting the jnlp file in the root directory of the server (which I'd rather not do, since it limits me to one jnlp-based app at a time). The relevant lines from nginx.conf: # HTTPS server # server { listen 443; server_name MyServer.org root /My/Root/Dir; passenger_enabled on; expires 1d; proxy_set_header X-FORWARDED_PROTO https; proxy_set_header X_FORWARDED_PROTO https;#the almighty google is not clear on which to use location /upload { proxy_pass https://127.0.0.1:443; } } The funny thing about this is, first, I was putting the jnlp into a directory called 'uploader', not 'upload', but that still appeared to trigger the problem, since that proxy_pass directive appeared in the logs. Second, again, moving the jnlp into root avoided the problem, because there wasn't any of this proxying due to ssl. So, how can I avoid the infinite proxy_pass loop in nginx?
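    A hedged reading of the lsof output against the config above: "location /upload { proxy_pass https://127.0.0.1:443; }" points back at the very listener that is serving the request, so each matching request re-enters the server block over a fresh TLS connection - which is what the ever-growing pairs of localhost:https sockets look like. Since Passenger is already running the app in-process, the location block may not be needed at all; if an explicit proxy is wanted, point it at the backend's own port instead. The 127.0.0.1:3000 address below is a stand-in for wherever the Sinatra app actually listens:

      # nginx.conf sketch (ssl_certificate / ssl_certificate_key directives omitted,
      # as in the original excerpt)
      server {
          listen 443;
          server_name MyServer.org;
          root /My/Root/Dir;
          passenger_enabled on;
          proxy_set_header X-Forwarded-Proto https;

          location /upload {
              proxy_pass http://127.0.0.1:3000;   # not back to this same :443 listener
          }
      }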

    Read the article

  • obtaining nimbuzz server certificate for nmdecrypt expert in NetMon

    - by lurscher
    I'm using Network Monitor 3.4 with the nmdecrypt expert. I'm opening a nimbuzz conversation node in the conversation window and i click Expert- nmDecrpt - run Expert that shows up a window where i have to add the server certificate. I am not sure how to retrieve the server certificate for nimbuzz XMPP chat service. Any idea how to do this? this question is a follow up question of this one. Edit for some background so it might be that this is encrypted with the server pubkey and i cannot retrieve the message, unless i debug the native binary and try to intercept the encryption code. I have a test client (using agsXMPP) that is able to connect with nimbuzz with no problems. the only thing that is not working is adding invisible mode. It seems this is some packet sent from the official client during login which i want to obtain. any suggestions to try to grab this info would be greatly appreciated. Maybe i should get myself (and learn) IDA pro? This is what i get inspecting the TLS frames on Network Monitor: Frame: Number = 81, Captured Frame Length = 769, MediaType = ETHERNET + Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[...],SourceAddress:[....] + Ipv4: Src = ..., Dest = 192.168.2.101, Next Protocol = TCP, Packet ID = 9939, Total IP Length = 755 - Tcp: Flags=...AP..., SrcPort=5222, DstPort=3578, PayloadLen=715, Seq=4101074854 - 4101075569, Ack=1127356300, Win=4050 (scale factor 0x0) = 4050 SrcPort: 5222 DstPort: 3578 SequenceNumber: 4101074854 (0xF4716FA6) AcknowledgementNumber: 1127356300 (0x4332178C) + DataOffset: 80 (0x50) + Flags: ...AP... Window: 4050 (scale factor 0x0) = 4050 Checksum: 0x8841, Good UrgentPointer: 0 (0x0) TCPPayload: SourcePort = 5222, DestinationPort = 3578 TLSSSLData: Transport Layer Security (TLS) Payload Data - TLS: TLS Rec Layer-1 HandShake: Server Hello.; TLS Rec Layer-2 HandShake: Certificate.; TLS Rec Layer-3 HandShake: Server Hello Done. - TlsRecordLayer: TLS Rec Layer-1 HandShake: ContentType: HandShake: - Version: TLS 1.0 Major: 3 (0x3) Minor: 1 (0x1) Length: 42 (0x2A) - SSLHandshake: SSL HandShake ServerHello(0x02) HandShakeType: ServerHello(0x02) Length: 38 (0x26) - ServerHello: 0x1 + Version: TLS 1.0 + RandomBytes: SessionIDLength: 0 (0x0) TLSCipherSuite: TLS_RSA_WITH_AES_256_CBC_SHA { 0x00, 0x35 } CompressionMethod: 0 (0x0) - TlsRecordLayer: TLS Rec Layer-2 HandShake: ContentType: HandShake: - Version: TLS 1.0 Major: 3 (0x3) Minor: 1 (0x1) Length: 654 (0x28E) - SSLHandshake: SSL HandShake Certificate(0x0B) HandShakeType: Certificate(0x0B) Length: 650 (0x28A) - Cert: 0x1 CertLength: 647 (0x287) - Certificates: CertificateLength: 644 (0x284) - X509Cert: Issuer: nimbuzz.com,Nimbuzz,NL, Subject: nimbuzz.com,Nimbuzz,NL + SequenceHeader: - TbsCertificate: Issuer: nimbuzz.com,Nimbuzz,NL, Subject: nimbuzz.com,Nimbuzz,NL + SequenceHeader: + Tag0: + Version: (2) + SerialNumber: -1018418383 + Signature: Sha1WithRSAEncryption (1.2.840.113549.1.1.5) - Issuer: nimbuzz.com,Nimbuzz,NL - RdnSequence: nimbuzz.com,Nimbuzz,NL + SequenceOfHeader: 0x1 + Name: NL + Name: Nimbuzz + Name: nimbuzz.com + Validity: From: 02/22/10 20:22:32 UTC To: 02/20/20 20:22:32 UTC + Subject: nimbuzz.com,Nimbuzz,NL - SubjectPublicKeyInfo: RsaEncryption (1.2.840.113549.1.1.1) + SequenceHeader: + Algorithm: RsaEncryption (1.2.840.113549.1.1.1) - SubjectPublicKey: - AsnBitStringHeader: - AsnId: BitString type (Universal 3) - LowTag: Class: (00......) Universal (0) Type: (..0.....) 
Primitive TagValue: (...00011) 3 - AsnLen: Length = 141, LengthOfLength = 1 LengthType: LengthOfLength = 1 Length: 141 bytes BitString: + Tag3: + Extensions: - SignatureAlgorithm: Sha1WithRSAEncryption (1.2.840.113549.1.1.5) - SequenceHeader: - AsnId: Sequence and SequenceOf types (Universal 16) + LowTag: - AsnLen: Length = 13, LengthOfLength = 0 Length: 13 bytes, LengthOfLength = 0 + Algorithm: Sha1WithRSAEncryption (1.2.840.113549.1.1.5) - Parameters: Null Value - Sha1WithRSAEncryption: Null Value + AsnNullHeader: - Signature: - AsnBitStringHeader: - AsnId: BitString type (Universal 3) - LowTag: Class: (00......) Universal (0) Type: (..0.....) Primitive TagValue: (...00011) 3 - AsnLen: Length = 129, LengthOfLength = 1 LengthType: LengthOfLength = 1 Length: 129 bytes BitString: + TlsRecordLayer: TLS Rec Layer-3 HandShake:
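One way to retrieve the certificate the server presents, without touching the client binary, is openssl's s_client, which can do the XMPP STARTTLS handshake on 5222 and plain TLS on the legacy 5223 port. A sketch - the hostname is whichever XMPP server the capture shows the client connecting to; nimbuzz.com is used as the placeholder from the certificate subject above:

  # STARTTLS on the standard XMPP client port
  openssl s_client -connect nimbuzz.com:5222 -starttls xmpp -showcerts </dev/null

  # or the legacy direct-TLS port, if the service listens there
  openssl s_client -connect nimbuzz.com:5223 -showcerts </dev/null

  # save the BEGIN/END CERTIFICATE block as server.pem, then convert for tools
  # that expect a DER-encoded .cer file
  openssl x509 -in server.pem -outform DER -out server.cer

Bear in mind this only yields the public certificate; actually decrypting captured TLS application data in nmDecrypt still requires the server's private key, which cannot be recovered this way.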

    Read the article

  • Windows 7, HTTPS WebDav: Asks for password twice and fails. Any workarounds?

    - by AutoDMC
    Howdy. I have a Dav server running with PHP SabreDav (code.google.com/p/sabredav/wiki/Windows) on Cherokee at an HTTPS secured URL. It's set to use https, and uses Digest Authentication. I can log in with multiple browsers and a few third party clients (BitKinex and Java AnyClient can connect and browse as well, caveats below). However, when attempting to log in with Windows 7 (surprise, surprise), it asks for my password twice, then tells me that my folder is invalid. I have verified that the server is using Digest authentication. I've verified multiple times that third party software can connect. I even went out and bought a GoDaddy SSL certificate so my SSL wouldn't be self signed anymore. I've applied the registry hacks here: support.microsoft.com/kb/943280 (Note that the article says the "fix" already exists for Windows 7, I just need magical registry hax to get it to work) I've applied the registry hacks here: support.microsoft.com/kb/941050 I've applied the registry hacks here: support.microsoft.com/kb/841215 (Supposedly allows Basic Auth, which shouldn't apply, but why not?) All to no avail; Windows continues to ask for my password twice, then state that "The folder you entered does not appear to be valid. Please choose another." Try the command line? Sure: I've attempted to access with NET USE "https://dav.example.com/" password /USER:me (System error 59) I've attempted to access with NET USE "https://dav.example.com/" (System error 1790) I've attempted to access with NET USE "https://dav.example.com/subdir/" password /USER:me (System error 59) I've attempted to access with NET USE "https://dav.example.com/subdir/" (System error 1790) For good luck: ping dav.example.com ... works. And again, web browsers can access the share just fine, so can third party tools. Best I can tell at this point is "HAHA, NO WEBDAV FOR YOU ON WINDOWS 7" which would be fine except everyone who will be using this application... uses Windows 7. And most are not as persistent or pugnacious as I am. I feel like I've burned through every random suggestion I've found anywhere in the first 10 pages of Google on every search term I can think of. Any ideas? I need it to be Webdav, I need it to be over HTTPS, and I really do need a method to access it from Windows 7. EXTRA DETAIL: However, the "third party" programs I've tried have either been buggy, incomplete, or have silly ... "glitches." For example, BitKinex seems to fixate on any http error codes sent, so if there's a glitch reading a directory, BAM, that directory is always listed empty. Long directory listings also show up as blank, even though the transfer panel shows that directory listing is still taking place. In any case, BitKinex is useless for development purposes for the reasons above. And besides, I'm building this for people other than myself, people who will want to get this dav share working "the regular way."
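    Two more hedged things that are quick to rule out from an elevated command prompt, since the Windows 7 WebDAV mini-redirector takes a different code path for UNC names than for https:// URLs (dav.example.com and "subdir" are the same placeholders as in the attempts above):

      rem is the WebDAV redirector service actually running?
      sc query WebClient

      rem map via the mini-redirector's UNC form instead of the https:// URL
      net use Z: \\dav.example.com@SSL\subdir password /user:me

      rem WebClient auth level (2 = allow Basic over SSL and non-SSL); Digest should
      rem not need this, but it is a common troubleshooting step - restart the service after
      reg add HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters /v BasicAuthLevel /t REG_DWORD /d 2 /f
      net stop WebClient && net start WebClient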

    Read the article

  • how to set keyboard for phonetic Hindi typing oneiric for Wx keyboard on QWERTY keyboard

    - by Registered User
    I am trying to type documents in the Hindi language. My OS: Ubuntu 11.10 with the Gnome environment (http://packages.ubuntu.com/search?keywords=gnome-session-fallback); I do not use the Unity interface. The method is shown here: http://www.youtube.com/watch?v=LL7icGNhIfI I am able to type in Hindi in LibreOffice and gedit with the method shown above, but it is a very difficult way of typing because I have to remember all the English keys corresponding to the Hindi characters as mapped here: http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html-single/International_Language_Support_Guide/images/hindi.png What I want is to type phonetically rather than use that kind of keyboard layout. My laptop has a US English keyboard; see the snapshot here: https://picasaweb.google.com/107404068162388981296/UnknownAsianLanguage#5704771437325752466 I have selected the phonetic input method in the IBus window, but it still does not work as expected. I expect to be able to type phonetically (given the phonetic selection above), but what actually happens is that I have to type as if using a fixed QWERTY-mapped Hindi keyboard, which is a deviation from the expected behavior. How can I get the correct behavior?
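    For what it's worth, a sketch of the usual route to a phonetic (ITRANS-style) Hindi input method under IBus - the hi-itrans engine from the m17n project maps roman phonetic input to Devanagari, so typing "namaste" produces the word instead of requiring the fixed InScript key positions. The package names are the standard Ubuntu ones; treat them as assumptions if your install differs:

      sudo apt-get install ibus-m17n m17n-db
      ibus-daemon -drx    # restart the IBus daemon so the new engines are listed

      # then IBus Preferences -> Input Method -> Select an input method
      #   -> Hindi -> itrans (engine id m17n:hi:itrans), and remove or deprioritize
      #   the plain "Hindi" layout that behaves like a fixed QWERTY mapping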

    Read the article

  • WCF WS-Security and WSE Nonce Authentication

    - by Rick Strahl
    WCF makes it fairly easy to access WS-* Web Services, except when you run into a service format that it doesn't support. Even then WCF provides a huge amount of flexibility to make the service clients work, however finding the proper interfaces to make that happen is not easy to discover and for the most part undocumented unless you're lucky enough to run into a blog, forum or StackOverflow post on the matter. This is definitely true for the Password Nonce as part of the WS-Security/WSE protocol, which is not natively supported in WCF. Specifically I had a need to create a WCF message on the client that includes a WS-Security header that looks like this from their spec document:<soapenv:Header> <wsse:Security soapenv:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <wsse:UsernameToken wsu:Id="UsernameToken-8" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <wsse:Username>TeStUsErNaMe1</wsse:Username> <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >TeStPaSsWoRd1</wsse:Password> <wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" >f8nUe3YupTU5ISdCy3X9Gg==</wsse:Nonce> <wsu:Created>2011-05-04T19:01:40.981Z</wsu:Created> </wsse:UsernameToken> </wsse:Security> </soapenv:Header> Specifically, the Nonce and Created keys are what WCF doesn't create or have a built in formatting for. Why is there a nonce? My first thought here was WTF? The username and password are there in clear text, what does the Nonce accomplish? The Nonce and created keys are are part of WSE Security specification and are meant to allow the server to detect and prevent replay attacks. The hashed nonce should be unique per request which the server can store and check for before running another request thus ensuring that a request is not replayed with exactly the same values. Basic ServiceUtl Import - not much Luck The first thing I did when I imported this service with a service reference was to simply import it as a Service Reference. The Add Service Reference import automatically detects that WS-Security is required and appropariately adds the WS-Security to the basicHttpBinding in the config file:<?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="RealTimeOnlineSoapBinding"> <security mode="Transport" /> </binding> <binding name="RealTimeOnlineSoapBinding1" /> </basicHttpBinding> </bindings> <client> <endpoint address="https://notarealurl.com:443/services/RealTimeOnline" binding="basicHttpBinding" bindingConfiguration="RealTimeOnlineSoapBinding" contract="RealTimeOnline.RealTimeOnline" name="RealTimeOnline" /> </client> </system.serviceModel> </configuration> If if I run this as is using code like this:var client = new RealTimeOnlineClient(); client.ClientCredentials.UserName.UserName = "TheUsername"; client.ClientCredentials.UserName.Password = "ThePassword"; … I get nothing in terms of WS-Security headers. The request is sent, but the the binding expects transport level security to be applied, rather than message level security. 
To fix this so that a WS-Security message header is sent the security mode can be changed to: <security mode="TransportWithMessageCredential" /> Now if I re-run I at least get a WS-Security header which looks like this:<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <u:Timestamp u:Id="_0"> <u:Created>2012-11-24T02:55:18.011Z</u:Created> <u:Expires>2012-11-24T03:00:18.011Z</u:Expires> </u:Timestamp> <o:UsernameToken u:Id="uuid-18c215d4-1106-40a5-8dd1-c81fdddf19d3-1"> <o:Username>TheUserName</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >ThePassword</o:Password> </o:UsernameToken> </o:Security> </s:Header> Closer! Now the WS-Security header is there along with a timestamp field (which might not be accepted by some WS-Security expecting services), but there's no Nonce or created timestamp as required by my original service. Using a CustomBinding instead My next try was to go with a CustomBinding instead of basicHttpBinding as it allows a bit more control over the protocol and transport configurations for the binding. Specifically I can explicitly specify the message protocol(s) used. Using configuration file settings here's what the config file looks like:<?xml version="1.0"?> <configuration> <system.serviceModel> <bindings> <customBinding> <binding name="CustomSoapBinding"> <security includeTimestamp="false" authenticationMode="UserNameOverTransport" defaultAlgorithmSuite="Basic256" requireDerivedKeys="false" messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10"> </security> <textMessageEncoding messageVersion="Soap11"></textMessageEncoding> <httpsTransport maxReceivedMessageSize="2000000000"/> </binding> </customBinding> </bindings> <client> <endpoint address="https://notrealurl.com:443/services/RealTimeOnline" binding="customBinding" bindingConfiguration="CustomSoapBinding" contract="RealTimeOnline.RealTimeOnline" name="RealTimeOnline" /> </client> </system.serviceModel> <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> </startup> </configuration> This ends up creating a cleaner header that's missing the timestamp field which can cause some services problems. The WS-Security header output generated with the above looks like this:<s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <o:UsernameToken u:Id="uuid-291622ca-4c11-460f-9886-ac1c78813b24-1"> <o:Username>TheUsername</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >ThePassword</o:Password> </o:UsernameToken> </o:Security> </s:Header> This is closer as it includes only the username and password. The key here is the protocol for WS-Security:messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10" which explicitly specifies the protocol version. There are several variants of this specification but none of them seem to support the nonce unfortunately. 
This protocol does allow for optional omission of the Nonce and created timestamp provided (which effectively makes those keys optional). With some services I tried that requested a Nonce just using this protocol actually worked where the default basicHttpBinding failed to connect, so this is a possible solution for access to some services. Unfortunately for my target service that was not an option. The nonce has to be there. Creating Custom ClientCredentials As it turns out WCF doesn't have support for the Digest Nonce as part of WS-Security, and so as far as I can tell there's no way to do it just with configuration settings. I did a bunch of research on this trying to find workarounds for this, and I did find a couple of entries on StackOverflow as well as on the MSDN forums. However, none of these are particularily clear and I ended up using bits and pieces of several of them to arrive at a working solution in the end. http://stackoverflow.com/questions/896901/wcf-adding-nonce-to-usernametoken http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/4df3354f-0627-42d9-b5fb-6e880b60f8ee The latter forum message is the more useful of the two (the last message on the thread in particular) and it has most of the information required to make this work. But it took some experimentation for me to get this right so I'll recount the process here maybe a bit more comprehensively. In order for this to work a number of classes have to be overridden: ClientCredentials ClientCredentialsSecurityTokenManager WSSecurityTokenizer The idea is that we need to create a custom ClientCredential class to hold the custom properties so they can be set from the UI or via configuration settings. The TokenManager and Tokenizer are mainly required to allow the custom credentials class to flow through the WCF pipeline and eventually provide custom serialization. 
Here are the three classes required and their full implementations:public class CustomCredentials : ClientCredentials { public CustomCredentials() { } protected CustomCredentials(CustomCredentials cc) : base(cc) { } public override System.IdentityModel.Selectors.SecurityTokenManager CreateSecurityTokenManager() { return new CustomSecurityTokenManager(this); } protected override ClientCredentials CloneCore() { return new CustomCredentials(this); } } public class CustomSecurityTokenManager : ClientCredentialsSecurityTokenManager { public CustomSecurityTokenManager(CustomCredentials cred) : base(cred) { } public override System.IdentityModel.Selectors.SecurityTokenSerializer CreateSecurityTokenSerializer(System.IdentityModel.Selectors.SecurityTokenVersion version) { return new CustomTokenSerializer(System.ServiceModel.Security.SecurityVersion.WSSecurity11); } } public class CustomTokenSerializer : WSSecurityTokenSerializer { public CustomTokenSerializer(SecurityVersion sv) : base(sv) { } protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token) { UserNameSecurityToken userToken = token as UserNameSecurityToken; string tokennamespace = "o"; DateTime created = DateTime.Now; string createdStr = created.ToString("yyyy-MM-ddThh:mm:ss.fffZ"); // unique Nonce value - encode with SHA-1 for 'randomness' // in theory the nonce could just be the GUID by itself string phrase = Guid.NewGuid().ToString(); var nonce = GetSHA1String(phrase); // in this case password is plain text // for digest mode password needs to be encoded as: // PasswordAsDigest = Base64(SHA-1(Nonce + Created + Password)) // and profile needs to change to //string password = GetSHA1String(nonce + createdStr + userToken.Password); string password = userToken.Password; writer.WriteRaw(string.Format( "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" + "<{0}:Username>" + userToken.UserName + "</{0}:Username>" + "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">" + password + "</{0}:Password>" + "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" + "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace)); } protected string GetSHA1String(string phrase) { SHA1CryptoServiceProvider sha1Hasher = new SHA1CryptoServiceProvider(); byte[] hashedDataBytes = sha1Hasher.ComputeHash(Encoding.UTF8.GetBytes(phrase)); return Convert.ToBase64String(hashedDataBytes); } } Realistically only the CustomTokenSerializer has any significant code in. The code there deals with actually serializing the custom credentials using low level XML semantics by writing output into an XML writer. I can't take credit for this code - most of the code comes from the MSDN forum post mentioned earlier - I made a few adjustments to simplify the nonce generation and also added some notes to allow for PasswordDigest generation. Per spec the nonce is nothing more than a unique value that's supposed to be 'random'. I'm thinking that this value can be any string that's unique and a GUID on its own probably would have sufficed. Comments on other posts that GUIDs can be potentially guessed are highly exaggerated to say the least IMHO. 
To satisfy even that aspect though I added the SHA1 encryption and binary decoding to give a more random value that would be impossible to 'guess'. The original example from the forum post used another level of encoding and decoding to string in between - but that really didn't accomplish anything but extra overhead. The header output generated from this looks like this:<s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <o:UsernameToken u:Id="uuid-f43d8b0d-0ebb-482e-998d-f544401a3c91-1" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <o:Username>TheUsername</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password> <o:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" >PjVE24TC6HtdAnsf3U9c5WMsECY=</o:Nonce> <u:Created>2012-11-23T07:10:04.670Z</u:Created> </o:UsernameToken> </o:Security> </s:Header> which is exactly as it should be. Password Digest? In my case the password is passed in plain text over an SSL connection, so there's no digest required so I was done with the code above. Since I don't have a service handy that requires a password digest,  I had no way of testing the code for the digest implementation, but here is how this is likely to work. If you need to pass a digest encoded password things are a little bit trickier. The password type namespace needs to change to: http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest and then the password value needs to be encoded. The format for password digest encoding is this: Base64(SHA-1(Nonce + Created + Password)) and it can be handled in the code above with this code (that's commented in the snippet above): string password = GetSHA1String(nonce + createdStr + userToken.Password); The entire WriteTokenCore method for digest code looks like this:protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token) { UserNameSecurityToken userToken = token as UserNameSecurityToken; string tokennamespace = "o"; DateTime created = DateTime.Now; string createdStr = created.ToString("yyyy-MM-ddThh:mm:ss.fffZ"); // unique Nonce value - encode with SHA-1 for 'randomness' // in theory the nonce could just be the GUID by itself string phrase = Guid.NewGuid().ToString(); var nonce = GetSHA1String(phrase); string password = GetSHA1String(nonce + createdStr + userToken.Password); writer.WriteRaw(string.Format( "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" + "<{0}:Username>" + userToken.UserName + "</{0}:Username>" + "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest\">" + password + "</{0}:Password>" + "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" + "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace)); } I had no service to connect to to try out Digest auth - if you end up needing it and get it to work please drop a comment… How to use the custom Credentials The easiest way to use the custom credentials is to create the client in code. 
Here's a factory method I use to create an instance of my service client:  public static RealTimeOnlineClient CreateRealTimeOnlineProxy(string url, string username, string password) { if (string.IsNullOrEmpty(url)) url = "https://notrealurl.com:443/cows/services/RealTimeOnline"; CustomBinding binding = new CustomBinding(); var security = TransportSecurityBindingElement.CreateUserNameOverTransportBindingElement(); security.IncludeTimestamp = false; security.DefaultAlgorithmSuite = SecurityAlgorithmSuite.Basic256; security.MessageSecurityVersion = MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10; var encoding = new TextMessageEncodingBindingElement(); encoding.MessageVersion = MessageVersion.Soap11; var transport = new HttpsTransportBindingElement(); transport.MaxReceivedMessageSize = 20000000; // 20 megs binding.Elements.Add(security); binding.Elements.Add(encoding); binding.Elements.Add(transport); RealTimeOnlineClient client = new RealTimeOnlineClient(binding, new EndpointAddress(url)); // to use full client credential with Nonce uncomment this code: // it looks like this might not be required - the service seems to work without it client.ChannelFactory.Endpoint.Behaviors.Remove<System.ServiceModel.Description.ClientCredentials>(); client.ChannelFactory.Endpoint.Behaviors.Add(new CustomCredentials()); client.ClientCredentials.UserName.UserName = username; client.ClientCredentials.UserName.Password = password; return client; } This returns a service client that's ready to call other service methods. The key item in this code is the ChannelFactory endpoint behavior modification that that first removes the original ClientCredentials and then adds the new one. The ClientCredentials property on the client is read only and this is the way it has to be added.   Summary It's a bummer that WCF doesn't suport WSE Security authentication with nonce values out of the box. From reading the comments in posts/articles while I was trying to find a solution, I found that this feature was omitted by design as this protocol is considered unsecure. While I agree that plain text passwords are rarely a good idea even if they go over secured SSL connection as WSE Security does, there are unfortunately quite a few services (mosly Java services I suspect) that use this protocol. I've run into this twice now and trying to find a solution online I can see that this is not an isolated problem - many others seem to have struggled with this. It seems there are about a dozen questions about this on StackOverflow all with varying incomplete answers. Hopefully this post provides a little more coherent content in one place. Again I marvel at WCF and its breadth of support for protocol features it has in a single tool. And even when it can't handle something there are ways to get it working via extensibility. But at the same time I marvel at how freaking difficult it is to arrive at these solutions. I mean there's no way I could have ever figured this out on my own. It takes somebody working on the WCF team or at least being very, very intricately involved in the innards of WCF to figure out the interconnection of the various objects to do this from scratch. Luckily this is an older problem that has been discussed extensively online and I was able to cobble together a solution from the online content. 
I'm glad it worked out that way, but it feels dirty and incomplete in that there's a whole learning path that was omitted to get here… Man am I glad I'm not dealing with SOAP services much anymore. REST service security - even when using some sort of federation - is a piece of cake by comparison :-) I'm sure once standards bodies get involved we'll be right back in security standard hell… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in WCF, Web Services.

    Read the article

  • Controlling Brightness in Sony Vaio SVE151A11W |Not Working|

    - by Rabimba Karanjai
    Disclaimer first: I have gone through the following threads and followed all of their advice, but none of them worked: "Brightness Control Problem on Sony VAIO with NVIDIA GT-320M", "Unable to change brightness settings in Sony Vaio E series laptop", "Brightness doesn't change on Sony laptop". Now the problem: I have installed Ubuntu 12.10 (64-bit) on this Sony Vaio laptop. After a vanilla installation the brightness keys are working and I can see the brightness indicator going up and down, but it isn't having any effect on the actual brightness of the display. I have installed additional drivers too, but that didn't solve the problem. I can't seem to change the brightness at all. Does anyone know how I can fix this?
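    A couple of hedged, generic backlight workarounds worth trying on top of the threads above - nothing here is specific to the SVE151A11W. First check whether the sysfs backlight node responds at all (if it does, only the key handling is broken; if it does not, the driver is), then try the vendor ACPI backlight interface via a kernel parameter:

      # does writing the sysfs node change the panel brightness?
      ls /sys/class/backlight/
      echo 50 | sudo tee /sys/class/backlight/*/brightness

      # /etc/default/grub -- commonly tried parameter for misbehaving backlights
      GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_backlight=vendor"

      sudo update-grub    # then reboot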

    Read the article

  • Using BPEL Performance Statistics to Diagnose Performance Bottlenecks

    - by fip
    Tuning performance of Oracle SOA 11G applications could be challenging. Because SOA is a platform for you to build composite applications that connect many applications and "services", when the overall performance is slow, the bottlenecks could be anywhere in the system: the applications/services that SOA connects to, the infrastructure database, or the SOA server itself.How to quickly identify the bottleneck becomes crucial in tuning the overall performance. Fortunately, the BPEL engine in Oracle SOA 11G (and 10G, for that matter) collects BPEL Engine Performance Statistics, which show the latencies of low level BPEL engine activities. The BPEL engine performance statistics can make it a bit easier for you to identify the performance bottleneck. Although the BPEL engine performance statistics are always available, the access to and interpretation of them are somewhat obscure in the early and current (PS5) 11G versions. This blog attempts to offer instructions that help you to enable, retrieve and interpret the performance statistics, before the future versions provides a more pleasant user experience. Overview of BPEL Engine Performance Statistics  SOA BPEL has a feature of collecting some performance statistics and store them in memory. One MBean attribute, StatLastN, configures the size of the memory buffer to store the statistics. This memory buffer is a "moving window", in a way that old statistics will be flushed out by the new if the amount of data exceeds the buffer size. Since the buffer size is limited by StatLastN, impacts of statistics collection on performance is minimal. By default StatLastN=-1, which means no collection of performance data. Once the statistics are collected in the memory buffer, they can be retrieved via another MBean oracle.as.soainfra.bpel:Location=[Server Name],name=BPELEngine,type=BPELEngine.> My friend in Oracle SOA development wrote this simple 'bpelstat' web app that looks up and retrieves the performance data from the MBean and displays it in a human readable form. It does not have beautiful UI but it is fairly useful. Although in Oracle SOA 11.1.1.5 onwards the same statistics can be viewed via a more elegant UI under "request break down" at EM -> SOA Infrastructure -> Service Engines -> BPEL -> Statistics, some unsophisticated minds like mine may still prefer the simplicity of the 'bpelstat' JSP. One thing that simple JSP does do well is that you can save the page and send it to someone to further analyze Follows are the instructions of how to install and invoke the BPEL statistic JSP. My friend in SOA Development will soon blog about interpreting the statistics. Stay tuned. Step1: Enable BPEL Engine Statistics for Each SOA Servers via Enterprise Manager First st you need to set the StatLastN to some number as a way to enable the collection of BPEL Engine Performance Statistics EM Console -> soa-infra(Server Name) -> SOA Infrastructure -> SOA Administration -> BPEL Properties Click on "More BPEL Configuration Properties" Click on attribute "StatLastN", set its value to some integer number. Typically you want to set it 1000 or more. Step 2: Download and Deploy bpelstat.war File to Admin Server, Note: the WAR file contains a JSP that does NOT have any security restriction. You do NOT want to keep in your production server for a long time as it is a security hazard. Deactivate the war once you are done. 
Download the bpelstat.war to your local PC At WebLogic Console, Go to Deployments -> Install Click on the "upload your file(s)" Click the "Browse" button to upload the deployment to Admin Server Accept the uploaded file as the path, click next Check the default option "Install this deployment as an application" Check "AdminServer" as the target server Finish the rest of the deployment with default settings Console -> Deployments Check the box next to "bpelstat" application Click on the "Start" button. It will change the state of the app from "prepared" to "active" Step 3: Invoke the BPEL Statistic Tool The BPELStat tool merely call the MBean of BPEL server and collects and display the in-memory performance statics. You usually want to do that after some peak loads. Go to http://<admin-server-host>:<admin-server-port>/bpelstat Enter the correct admin hostname, port, username and password Enter the SOA Server Name from which you want to collect the performance statistics. For example, SOA_MS1, etc. Click Submit Keep doing the same for all SOA servers. Step 3: Interpret the BPEL Engine Statistics You will see a few categories of BPEL Statistics from the JSP Page. First it starts with the overall latency of BPEL processes, grouped by synchronous and asynchronous processes. Then it provides the further break down of the measurements through the life time of a BPEL request, which is called the "request break down". 1. Overall latency of BPEL processes The top of the page shows that the elapse time of executing the synchronous process TestSyncBPELProcess from the composite TestComposite averages at about 1543.21ms, while the elapse time of executing the asynchronous process TestAsyncBPELProcess from the composite TestComposite2 averages at about 1765.43ms. The maximum and minimum latency were also shown. Synchronous process statistics <statistics>     <stats key="default/TestComposite!2.0.2-ScopedJMSOSB*soa_bfba2527-a9ba-41a7-95c5-87e49c32f4ff/TestSyncBPELProcess" min="1234" max="4567" average="1543.21" count="1000">     </stats> </statistics> Asynchronous process statistics <statistics>     <stats key="default/TestComposite2!2.0.2-ScopedJMSOSB*soa_bfba2527-a9ba-41a7-95c5-87e49c32f4ff/TestAsyncBPELProcess" min="2234" max="3234" average="1765.43" count="1000">     </stats> </statistics> 2. Request break down Under the overall latency categorized by synchronous and asynchronous processes is the "Request breakdown". Organized by statistic keys, the Request breakdown gives finer grain performance statistics through the life time of the BPEL requests.It uses indention to show the hierarchy of the statistics. Request breakdown <statistics>     <stats key="eng-composite-request" min="0" max="0" average="0.0" count="0">         <stats key="eng-single-request" min="22" max="606" average="258.43" count="277">             <stats key="populate-context" min="0" max="0" average="0.0" count="248"> Please note that in SOA 11.1.1.6, the statistics under Request breakdown is aggregated together cross all the BPEL processes based on statistic keys. It does not differentiate between BPEL processes. If two BPEL processes happen to have the statistic that share same statistic key, the statistics from two BPEL processes will be aggregated together. Keep this in mind when we go through more details below. 2.1 BPEL process activity latencies A very useful measurement in the Request Breakdown is the performance statistics of the BPEL activities you put in your BPEL processes: Assign, Invoke, Receive, etc. 
The names of the measurement in the JSP page directly come from the names to assign to each BPEL activity. These measurements are under the statistic key "actual-perform" Example 1:  Follows is the measurement for BPEL activity "AssignInvokeCreditProvider_Input", which looks like the Assign activity in a BPEL process that assign an input variable before passing it to the invocation:                                <stats key="AssignInvokeCreditProvider_Input" min="1" max="8" average="1.9" count="153">                                     <stats key="sensor-send-activity-data" min="0" max="1" average="0.0" count="306">                                     </stats>                                     <stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="153">                                     </stats>                                     <stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="306">                                     </stats>                                 </stats> Note: because as previously mentioned that the statistics cross all BPEL processes are aggregated together based on statistic keys, if two BPEL processes happen to name their Invoke activity the same name, they will show up at one measurement (i.e. statistic key). Example 2: Follows is the measurement of BPEL activity called "InvokeCreditProvider". You can not only see that by average it takes 3.31ms to finish this call (pretty fast) but also you can see from the further break down that most of this 3.31 ms was spent on the "invoke-service".                                  <stats key="InvokeCreditProvider" min="1" max="13" average="3.31" count="153">                                     <stats key="initiate-correlation-set-again" min="0" max="0" average="0.0" count="153">                                     </stats>                                     <stats key="invoke-service" min="1" max="13" average="3.08" count="153">                                         <stats key="prep-call" min="0" max="1" average="0.04" count="153">                                         </stats>                                     </stats>                                     <stats key="initiate-correlation-set" min="0" max="0" average="0.0" count="153">                                     </stats>                                     <stats key="sensor-send-activity-data" min="0" max="0" average="0.0" count="306">                                     </stats>                                     <stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="153">                                     </stats>                                     <stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="306">                                     </stats>                                     <stats key="update-audit-trail" min="0" max="2" average="0.03" count="153">                                     </stats>                                 </stats> 2.2 BPEL engine activity latency Another type of measurements under Request breakdown are the latencies of underlying system level engine activities. These activities are not directly tied to a particular BPEL process or process activity, but they are critical factors in the overall engine performance. These activities include the latency of saving asynchronous requests to database, and latency of process dehydration. 
My friend Malkit Bhasin is working on providing more information on interpreting the statistics on engine activities on his blog (https://blogs.oracle.com/malkit/). I will update this blog once the information becomes available. Update on 2012-10-02: My friend Malkit Bhasin has published the detail interpretation of the BPEL service engine statistics at his blog http://malkit.blogspot.com/2012/09/oracle-bpel-engine-soa-suite.html.
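Once a bpelstat page has been saved, pulling individual numbers out of it is easy to script. A small hedged sketch, assuming the XML fragments shown above have been saved to a local file named stats.xml (the file name and the statistic key are illustrative; use the grep form if your xmllint lacks --xpath):

  # average latency recorded under one statistic key, e.g. invoke-service
  xmllint --xpath 'string(//stats[@key="invoke-service"]/@average)' stats.xml

  # quick scan of every key with its min/max/average attributes
  grep -o 'key="[^"]*"[^>]*average="[^"]*"' stats.xml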

    Read the article

  • Evolution of Apple: A Fan Spliced Mega Tribute to the Apple Product Lineup

    - by Jason Fitzpatrick
    Whether you’re an Apple fan or not, this 3.5 minute tribute to the evolution of Apple products is a neat look back at decades of computing history and iconic design. Put together by Apple fan August Brandels, the video splices together Apple commercials and promotional footage from the last 30 years (remixed against the catchy background tune Silhouettes by Avicii) into a mega tribute to the computer giant. If nothing else they should hire the guy to do motivational videos for annual employee meetings. [via Tech Crunch]

    Read the article

  • HP Compaq 2510p wireless disabled by hardware switch

    - by mylovelyhorse
    I have an HP Compaq 2510p laptop running ubuntu 12.04 LTS. ubuntu reports that wireless is disabled by means of a hardware switch. There is a 'soft-key' button on the laptop to control the physical wireless hardware but this does not respond. There is no other button, slider, (fn)+ combination to control the physical wireless hardware. There is no BIOS function to disable wireless (and on XP - previous OS - wireless functioned fine). mike@ubuntu:~$ rfkill list all 0: brcmwl-0: Wireless LAN Soft blocked: no Hard blocked: yes 1: hp-wifi: Wireless LAN Soft blocked: no Hard blocked: no Running rfkill unblock all doesn't change things and I can see no way to change use from 0: to 1: (if that's even possible - or desirable - in the first place). I have checked for additional drivers and the Broadcom proprietary wireless driver is already installed and has a green light. Essentially, I believe I need to make the HP 'soft-keys' work - or at least the wireless card toggle. Advice gratefully received. Cheers M
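    Two hedged things to try before assuming the radio itself is at fault, since "Hard blocked: yes" on the Broadcom device while the hp-wifi switch shows unblocked often just means stale switch state: reload HP's WMI hotkey driver so the soft-key is re-read, and clear any block carried over from the previous OS with a full power-off rather than a reboot:

      # reload the HP WMI driver that owns the soft-key rfkill switch
      sudo modprobe -r hp_wmi
      sudo modprobe hp_wmi
      rfkill list all

      # if the hard block persists: shut down completely, disconnect AC (and the
      # battery for a few seconds if practical), boot, then tap the wireless soft-key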

    Read the article

  • Silverlight Cream for December 05, 2010 -- #1003

    - by Dave Campbell
    In this (Almost) All-Submittal Issue: John Papa(-2-), Jesse Liberty, Tim Heuer, Dan Wahlin, Markus Egger, Phil Middlemiss, Coding4Fun, Michael Washington, Gill Cleeren, MichaelD!, Colin Eberhardt, Kunal Chowdhury, and Rabeeh Abla. Above the Fold: Silverlight: "Two-Way Binding on TreeView.SelectedItem" Phil Middlemiss WP7: "Taking Screen Shots of Windows Phone 7 Panorama Apps" Markus Egger Training: "Beginners Guide to Visual Studio LightSwitch (Part - 4)" Kunal Chowdhury Shoutouts: Don't let the fire go out... check out the Firestarter Labs Bart Czernicki discusses the need for 64-bit Silverlight: Why a 64-bit runtime for Silverlight 5 Matters Laurent Duveau is interviewed by the SilverlightShow folks to discuss his WP7 app: Laurent Duveau on Morse Code Flash Light WP7 Application From SilverlightCream.com: John Papa: Silverlight 5 Features John Papa has a post up highlighting his take on what's cool in the new featureset for Silverlight 5... including an external link to the keynote. Silverlight Firestarter Keynote and Sessions John Papa also has posted links to all the individual session videos... what a great resource! Yet Another Podcast #17 – Scott Guthrie Jesse Liberty went big with his latest Yet Another Podcast ... he is interviewing Scott Guthrie about the Firestarter, Silverlight, WP7. and more. Silverlight 5 Plans Revealed With this post from Tim Heuer, I find myself adding a Silverlight 5 tag... so bring on the fun! ... unless you've been overloaded like I have since last Thursday, you've probably seen this, but what the heck... Silverlight Firestarter Wrap Up and WCF RIA Services Talk Sample Code Phoenix's own Dan Wahlin had a great WCF RIA Services presentation at the Firestarter last week, and his material and lots of other good links are up on his blog, and I'd say that even if he didn't have a couple shoutouts to me in it :) Thanks Dan!! Taking Screen Shots of Windows Phone 7 Panorama Apps Markus Egger helps us all out with a post on how to get screenshots of your WP7 Panorama app... in case you haven't tried it ... it's not as easy as it sounds! Two-Way Binding on TreeView.SelectedItem Phil Middlemiss is back with a post taking some of the mystery out of the TreeView control bound to a data context and dealing with the SelectedItem property... oh yeah, and throw all that into MVVM! Great tutorial as usual, a cool behavior, and all the source. Native Extensions for Microsoft Silverlight Alan Cobb pointed me to a quick post up on the Coding4Fun site about the NESL (Native Extensions for SilverLight) from Microsoft that give access to some cool features of Windows 7 from Silverlight... I added an NESL tag in case other posts appear on this subject. Silverlight Simple Drag And Drop / Or Browse View Model / MVVM File Upload Control Michael Washington has another great tutorial up at CodeProject that expands on prior work he'd done with drag/drop file upload with this post on integrating an updated browse/upload into ViewModel/MVVM projects, all of which is Blendable. The validation story in Silverlight (Part 1) In good news for all of us, Gill Cleeren has started a tutorial series at SilverlightShow on Silverlight Validation. The first one is up discussing the basics... The Common Framework MichaelD! has a WPF/Silverlight framework up with Facebook Authentication, Xaml-driven IOC, T4 synchronous WCF proxies, and WP7 on the roadmap... source on CodePlex, check it out and give him some feedback. 
Exploring Reactive Extensions (Rx) through Twitter and Bing Maps Mashups If you've been waiting around to learn Rx, Colin Eberhardt has the post up for you (and me)... great tutorial up on Twitter and Bing Maps Mashups ... and all the code... for the twitter immediate app, and also the UKSnow one we showed last week... check out the demo page, and grab the source! Beginners Guide to Visual Studio LightSwitch (Part - 4) Kunal Chowdhury has the 4th part of his Lightswitch tutorial series up at SilverlightShow. In this one, he shows how to integrate multiple tables into a screen. It is here Take Your Silverlight Application Full Screen & intercept all windows keys !! Rabeeh Abla sent me this link to the blog describing a COM exposed library that intercepts all keys when Silverlight is full-screen. There are a few I hit when I'm going through blogs that Ctrl-W (FF) just won't take down and that annoys me... so this might be a solution if you have that problem... worth a look anyway! Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • 202 blog articles

    - by mprove
    All my blog articles under blogs.oracle.com since August 2005: 202 blog articles Apr 2012 blogs.oracle.com design patch Mar 2012 Interaction 12 - Critique Mar 2012 Typing. Clicking. Dancing. Feb 2012 Desktop Mobility in Hospitals with Oracle VDI /video Feb 2012 Interaction 12 in Dublin - Highlights of Day 3 Feb 2012 Interaction 12 in Dublin - Highlights of Day 2 Feb 2012 Interaction 12 in Dublin - Highlights of Day 1 Feb 2012 Shit Interaction Designers Say Feb 2012 Tips'n'Tricks for WebCenter #3: How to display custom page titles in Spaces Jan 2012 Tips'n'Tricks for WebCenter #2: How to create an Admin menu in Spaces and save a lot of time Jan 2012 Tips'n'Tricks for WebCenter #1: How to apply custom resources in Spaces Jan 2012 Merry XMas and a Happy 2012! Dec 2011 One Year Oracle SocialChat - The Movie Nov 2011 Frank Ludolph's Last Working Day Nov 2011 Hans Rosling at TED Oct 2011 200 Countries x 200 Years Oct 2011 Blog Aggregation for Desktop Virtualization Oct 2011 Oracle VDI at OOW 2011 Sep 2011 Design for Conversations & Conversations for Design Sep 2011 All Oracle UX Blogs Aug 2011 Farewell Loriot Aug 2011 Oracle VDI 3.3 Overview Aug 2011 Sutherland's Closing Remarks at HyperKult Aug 2011 Surface and Subface Aug 2011 Back to Childhood in UI Design Jul 2011 The Art of Engineering and The Engineering of Art Jul 2011 Oracle VDI Seminar - June-30 Jun 2011 SGD White Paper May 2011 TEDxHamburg Live Feed May 2011 Oracle VDI in 3 Minutes May 2011 Space Ship Earth 2011 May 2011 blog moving times Apr 2011 Frozen tag cloud Apr 2011 Oracle: Hardware Software Complete in 1953 Apr 2011 Interaction Design with Wireframes Apr 2011 A guide to closing down a project Feb 2011 Oracle VDI 3.2.2 Jan 2011 free VDI charts Jan 2011 Sun Founders Panel 2006 Dec 2010 Sutherland on Leadership Dec 2010 SocialChat: Efficiency of E20 Dec 2010 ALWAYS ON Desktop Virtualization Nov 2010 12,000 Desktops at JavaOne Nov 2010 SocialChat on Sharing Best Practices Oct 2010 Globe of Visitors Oct 2010 SocialChat about the Next Big Thing Oct 2010 Oracle VDI UX Story - Wireframes Oct 2010 What's a PC anyway? Oct 2010 SocialChat on Getting Things Done Oct 2010 SocialChat on Infoglut Oct 2010 IT Twenty Twenty Oct 2010 Desktop Virtualization Webcasts from OOW Oct 2010 Oracle VDI 3.2 Overview Sep 2010 Blog Usability Top 7 Sep 2010 100 and counting Aug 2010 Oracle'izing the VDI Blogs Aug 2010 SocialChat on Apple Aug 2010 SocialChat on Video Conferencing Aug 2010 Oracle VDI 3.2 - Features and Screenshots Aug 2010 SocialChat: Don't stop making waves Aug 2010 SocialChat: Giving Back to the Community Aug 2010 SocialChat on Learning in Meetings Aug 2010 iPAD's Natural User Interface Jul 2010 Last day for Sun Microsystems GmbH Jun 2010 SirValUse Celebration Snippets Jun 2010 10 years SirValUse - Happy Birthday! Jun 2010 Wim on Virtualization May 2010 New Home for Oracle VDI Apr 2010 Renaissance Slide Sorter Comments Apr 2010 Unboxing Sun Ray 3 Plus Apr 2010 Desktop Virtualisierung mit Sun VDI 3.1 Apr 2010 Blog Relaunch Mar 2010 Social Messaging Slides from CeBIT Mar 2010 Social Messaging Talk at CeBIT Feb 2010 Welcome Oracle Jan 2010 My last presentation at Sun Jan 2010 Ivan Sutherland on Leadership Jan 2010 Learning French with Sun VDI Jan 2010 Learning Danish with Sun Ray Jan 2010 VDI workshop in Nieuwegein Jan 2010 Happy New Year 2010 Jan 2010 On Creating Slides Dec 2009 Best VDI Ever Nov 2009 How to store the Big Bang Nov 2009 Social Enterprise Tools. Beipiel Sun. 
Nov 2009 Nov-19 Nov 2009 PDF and ODF links on your blog Nov 2009 Q&A on VDI and MySQL Cluster Nov 2009 Zürich next week: Swiss Intranet Summit 09 Nov 2009 Designing for a Sustainable World - World Usabiltiy Day, Nov-12 Nov 2009 How to export a desktop from VDI 3 Nov 2009 Virtualisation Roadshow in the UK Nov 2009 Project Wonderland at EDUCAUSE 09 Nov 2009 VDI Roadshow in Dublin, Nov-26, 2009 Nov 2009 Sun VDI at EDUCAUSE 09 Nov 2009 Sun VDI 3.1 Architecture and New Features Oct 2009 VDI 3.1 is Early-Access Sep 2009 Virtualization for MySQL on VMware Sep 2009 Silpion & 13. Stock Sommerparty Sep 2009 Sun Ray and VMware View 3.1.1 2009-08-31 New Set of Sun Ray Status Icons 2009-08-25 Virtualizing the VDI Core? 2009-08-23 World Usability Day Hamburg 2009 - CfP 2009-07-16 Rising Sun 2009-07-15 featuring twittermeme 2009-06-19 ISC09 Student Party on June-20 /Hamburg 2009-06-18 Before and behind the curtain of JavaOne 2009-06-09 20k desktops at JavaOne 2009-06-01 sweet microblogging 2009-05-25 VDI 3 - Why you need 3 VDI hosts and what you can do about that? 2009-05-21 IA Konferenz 2009 2009-05-20 Sun VDI 3 UX Story - Power of the Web 2009-05-06 Planet of Sun and Oracle User Experience Design 2009-04-22 Sun VDI 3 UX Story - User Research 2009-04-08 Sun VDI 3 UX Story - Concept Workshops 2009-04-06 Localized documentation for Sun Ray Connector for VMware View Manager 1.1 2009-04-03 Sun VDI 3 Press Release 2009-03-25 Sun VDI 3 launches today! 2009-03-25 Sun Ray Connector for VMware View Manager 1.1 Update 2009-03-11 desktop virtualization wiki relaunch 2009-03-06 VDI 3 at CeBIT hall 6, booth E36 2009-03-02 Keyboard layout problems with Sun Ray Connector for VMware VDM 2009-02-23 wikis.sun.com tips & tricks 2009-02-23 Sun VDI 3 is in Early Access 2009-02-09 VirtualCenter unable to decrypt passwords 2009-02-02 Sun & VMware Desktop Training 2009-01-30 VDI at next09? 2009-01-16 Sun VDI: How to use virtual machines with multiple network adapters 2009-01-07 Sun Ray and VMware View 2009-01-07 Hamburg World Usability Day 2008 - Webcasts 2009-01-06 Sun Ray Connector for VMware VDM slides 2008-12-15 mother of all demos 2008-12-08 Build your own Thumper 2008-12-03 Troubleshooting Sun Ray Connector for VMware VDM 2008-12-02 My Roller Tag Cloud 2008-11-28 Sun Ray Connector: SSL connection to VDM 2008-11-25 Setting up SSL and Sun Ray Connector for VMware VDM 2008-11-13 Inspiration for Today and Tomorrow 2008-10-23 Sun Ray Connector for VMware VDM released 2008-10-14 From Sketchpad to ILoveSketch 2008-10-09 Desktop Virtualization on Xing 2008-10-06 User Experience Forum on Xing 2008-10-06 Sun Ray Connector for VMware VDM certified 2008-09-17 Virtual Clouds over Las Vegas 2008-09-14 Bill Verplank sketches metaphors 2008-09-04 End of Early Access - Sun Ray Connector for VMware 2008-08-27 Early Access: Sun Ray Connector for VMware Virtual Desktop Manager 2008-08-12 Sun Virtual Desktop Connector - Insides on Recycling Part 2 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling Part 3 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling 2008-07-20 lost in wiki space 2008-07-07 Evolution of the Desktop 2008-06-17 Virtual Desktop Webcast 2008-06-16 Woodstock 2008-06-16 What's a Desktop PC anyway? 2008-06-09 Virtual-T-Box 2008-06-05 Virtualization Glossary 2008-05-06 Five User Experience Principles 2008-04-25 Virtualization News Feed 2008-04-21 Acetylcholinesterase - Second Season 2008-04-18 Acetylcholinesterase - End of Signal 2007-12-31 Produkt-Management ist... 
2007-10-22 Usability Verbände, Verteiler und Netzwerke. 2007-10-02 The Meaning is the Message 2007-09-28 Visualization Methods 2007-09-10 Inhouse und Open Source Projekte – Usability verankern und Synergien nutzen 2007-09-03 Der Schwabe Darth Vader entdeckt das Virale Marketing 2007-08-29 Dick Hardt 3.0 on Identity 2.0 2007-08-27 quality of written text depends on the tool 2007-07-27 podcasts for reboot9 2007-06-04 It is the user's itch that need to be scratched 2007-05-25 A duel at reboot9 2007-05-14 Taxonomien und Folksonomien - Tagging als neues HCI-Element 2007-05-10 Dueling Interaction Models of Personal-Computing and Web-Computing 2007-03-01 22.März: Weizenbaum. Rebel at Work. /Filmpremiere Hamburg 2007-02-25 Bruce Sterling at UbiComp 2006 /webcast 2006-11-12 FSOSS 2006 /webcasts 2006-11-10 Highway 101 2006-11-09 User Experience Roundtable Hamburg: EuroGEL 2006 2006-11-08 Douglas Adams' Hyperland (BBC 1990) 2006-10-08 Taxonomien und Folksonomien – Tagging als neues HCI-Element 2006-09-13 Usability im Unternehmen 2006-09-13 Doug does HyperScope 2006-08-26 TED Talks and TechTalks 2006-08-21 Kai Krause über seine Freundschaft zu Douglas Adams 2006-07-20 Rebel At Work: Film Portrait on Weizenbaum 2006-07-04 Gabriele Fischer, mp3 2006-06-07 Dick Hardt at ETech 06 2006-06-05 Weinberger: From Control to Conversation 2006-04-16 Eye Tracking at User Experience Roundtable Hamburg 2006-04-14 dropping knowledge 2006-04-09 GEL 2005 2006-03-13 slide photos of reboot7 2006-03-04 Dick Hardt on Identity 2.0 2006-02-28 User Experience Newsletter #13: Versioning 2006-02-03 Ester Dyson on Choice and Happyness 2006-02-02 Requirements-Engineering im Spannungsfeld von Individual- und Produktsoftware 2006-01-15 User Experience Newsletter #12: Intuition Quiz 2005-11-30 User Experience und Requirements-Engineering für Software-Projekte 2005-10-31 Ivan Sutherland on "Research and Fun" 2005-10-18 Ars Electronica / Mensch und Computer 2005 2005-09-14 60 Jahre nach Memex: Über die Unvereinbarkeit von Desktop- und Web-Paradigma 2005-08-31 reboot 7 2005-06-30

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
    It is closing in on a year now since Oracle’s acquisition of Datanomic, and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field. To give an illustration, combined with a shameless promotion of how EDQ can help to give quick insights into your data, I did a quick Phrase Profile of the subject field of emails to the Global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration!
As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, Peoplesoft, CRM on Demand, Fusion, DRM, Endeca, RightNow, and more - and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box.
Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack – a ready set of standard processes with standard interfaces, to provide integrated: Address verification and cleansing, Individual matching, and Organization matching. The services are suitable for either Batch or Real-Time processing, and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese and Korean data. In total across all locales, CDS includes well over a million dictionary entries. (Excerpt from EDQ’s CDS Individual Name Standardization Dictionary)
CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but does this in a way that creates a ‘best of both worlds’ situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate ‘Identity Resolution’ and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products.
Here is a brief guide to the main services provided in the pack:
Address Verification and Standardization (EDQ’s CDS Address Cleaning Process): The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch.
The Address Verification processor is wrapped in an EDQ process – this adds significant capabilities over calling the underlying Address Verification API directly, specifically:
- Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API
- Optimization of address verification by pre-standardizing data where required
- Formatting of output addresses into the input address fields normally used by applications
- Adding descriptions of the address verification and geocoding return codes
The process can then be used to provide real-time and batch address cleansing in any application, such as a simple web page calling address cleaning and geocoding as part of a check on individual data.
Duplicate Prevention: Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention to avoid issues caused by complex replication and synchronization of large volume customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called, and returns a number of key values. These are used to select other records (‘candidates’) that may match in the application data (which has been pre-seeded with keys using the same service). The ‘driving record’ (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record, and scores them according to the strength of match. In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while ensuring that the application retains control of data integrity and transactional commits. The process is explained below (EDQ Duplicate Prevention Architecture). Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs to have new key values generated before commit).
Batch Matching: In order to allow customers to use match rules in batch that differ from those used in real time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front end applications, but to conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case normally a more precise (and automated) type of matching is required, in order to minimize the review work performed by Data Stewards. In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel’s Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel’s ‘Full Match’ (match all records against each other) and ‘Incremental Match’ (match a subset of records against all of their selected candidates) modes.
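To make the call sequence above concrete, here is a minimal C# sketch of the stateless duplicate-prevention flow. It is not the actual EDQ CDS API: the record types, interfaces and repository below are hypothetical stand-ins for the generated web-service proxies and the application's own candidate lookup.

    using System.Collections.Generic;

    // Hypothetical stand-ins for the EDQ CDS web-service proxies and application data access.
    public class CustomerRecord { public string Name; public string Address; }
    public class MatchResult { public CustomerRecord Candidate; public int Score; }

    public interface IClusterKeyService { IList<string> GenerateKeys(CustomerRecord record); }
    public interface IMatchingService { IList<MatchResult> Match(CustomerRecord driving, IList<CustomerRecord> candidates); }
    public interface ICustomerRepository { IList<CustomerRecord> FindByClusterKeys(IList<string> keys); }

    public class DuplicatePreventionCheck
    {
        private readonly IClusterKeyService keyService;
        private readonly IMatchingService matchingService;
        private readonly ICustomerRepository repository;

        public DuplicatePreventionCheck(IClusterKeyService keyService, IMatchingService matchingService, ICustomerRepository repository)
        {
            this.keyService = keyService;
            this.matchingService = matchingService;
            this.repository = repository;
        }

        // Called by the application when a record is added or updated.
        public IList<MatchResult> FindLikelyDuplicates(CustomerRecord drivingRecord)
        {
            // 1. Ask the Cluster Key Generation service for key values for the driving record.
            IList<string> keys = keyService.GenerateKeys(drivingRecord);

            // 2. Select candidate records from the application data, which has been pre-seeded with keys.
            IList<CustomerRecord> candidates = repository.FindByClusterKeys(keys);

            // 3. Present the driving record and the candidates to the Matching Service, which scores each candidate.
            return matchingService.Match(drivingRecord, candidates);
        }
    }

The application remains in charge of data integrity and the transactional commit; the services only generate keys and score candidates, which is what keeps them stateless.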
The Future: The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including:
- An out-of-the-box Data Quality Dashboard
- Even more comprehensive international data handling
- Address search (suggesting multiple results)
- Integrated address matching
The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.

    Read the article

  • Smart Grid Gurus

    - by caroline.yu
    Join Paul Fetherland, AMI director at Hawaiian Electric Company (HECO), and Keith Sturkie, vice president of Information Technology, Mid-Carolina Electric Cooperative (MCEC), on Thursday, April 29 at 12 p.m. EDT for the free "Smart Grid Gurus" Webcast. In this Webcast, underwritten by Oracle Utilities, Intelligent Utility will profile Paul Fetherland and Keith Sturkie to examine how they ended up in their respective positions and how they are making smarter grids a reality at their companies. By attending, you will:
- Gain insight from the paths taken and lessons learned by HECO and MCEC as these two utilities add more grid intelligence to their operations
- Identify the keys to driving AMI deployment, increasing operational and productivity gains, and targeting new goals on the technology roadmap
- Learn why HECO is taking a careful, measured approach to AMI deployment, and how Hawaii's established renewable portfolio standard of 40% and an energy efficiency standard of 30%, both by 2030, impact its efforts
- Discover how MCEC's 45,000-meter AMI deployment, completed in 2005, reduced field trips for high-usage complaints by 90% in the first year, and MCEC's immediate goals for future technology implementation
To register, please follow this link.

    Read the article

  • Posting from ASP.NET WebForms page to another URL

    - by hajan
    A few days ago I had a case where I needed to make a FORM POST from my ASP.NET WebForms page to an external site URL. More specifically, I was working on implementing a Simple Payment System (like Amazon, PayPal, MoneyBookers). The operator asks you to make a FORM POST request to a given URL on their website, sending parameters together with the post which are computed at my application level (access keys, secret keys, signature, return-URL… etc). So, since we are not allowed to nest another form inside the <form runat="server"> … </form>, which is required because other controls in my ASPX code work on the server side, I thought to inject the HTML and create a FORM with method="POST". After making a proof of concept and testing some scenarios, I've concluded that I can do this very fast in two ways:
1. Using jQuery to create the form on the fly with the needed parameters and call submit()
2. Using HttpContext.Current.Response.Write to write the form on the server side (code-behind) and embed JavaScript code that will do the post
Both ways seemed fine.
1. Using jQuery to create the FORM html code and submit it. Let's say we have a 'PAY NOW' button in our ASPX code:
<asp:Button ID="btnPayNow" runat="server" Text="Pay Now" />
Now, if we want to make this button submit a FORM using the POST method to another website, the jQuery way should be as follows:
<script src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-1.5.1.js" type="text/javascript"></script>
<script type="text/javascript">
    $(function () {
        $("#btnPayNow").click(function (event) {
            event.preventDefault();
            // construct the htmlForm string (note: hidden inputs need a name attribute to be posted)
            var htmlForm = "<form id='myform' method='POST' action='http://www.microsoft.com'>" +
                "<input type='hidden' id='name' name='name' value='hajan' />" +
                "</form>";
            // Submit the form
            $(htmlForm).appendTo("body").submit();
        });
    });
</script>
Yes, as you see, the code fires on the btnPayNow click. It removes the default button behavior, then creates the htmlForm string. After that, using jQuery, we append the form to the body and submit it. Inside the form, you can see I have set the http://www.microsoft.com URL, so after clicking the button you should be automatically redirected to the Microsoft website (just for a test; of course, for the payment I'm using the operator's URL).
2. Using HttpContext.Current.Response.Write to write the form on the server side (code-behind) and embed JavaScript code that will do the post. The C# code-behind should be something like this:
public void btnPayNow_Click(object sender, EventArgs e)
{
    string Url = "http://www.microsoft.com";
    string formId = "myForm1";
    StringBuilder htmlForm = new StringBuilder();
    htmlForm.AppendLine("<html>");
    htmlForm.AppendLine(String.Format("<body onload='document.forms[\"{0}\"].submit()'>", formId));
    htmlForm.AppendLine(String.Format("<form id='{0}' method='POST' action='{1}'>", formId, Url));
    htmlForm.AppendLine("<input type='hidden' id='name' name='name' value='hajan' />");
    htmlForm.AppendLine("</form>");
    htmlForm.AppendLine("</body>");
    htmlForm.AppendLine("</html>");
    HttpContext.Current.Response.Clear();
    HttpContext.Current.Response.Write(htmlForm.ToString());
    HttpContext.Current.Response.End();
}
So, with this code we create the htmlForm string using the StringBuilder class and then just write the HTML to the page using HttpContext.Current.Response.Write. The interesting part here is that we submit the form using JavaScript code: document.forms["myForm1"].submit(). This code runs on the body load event, which means once the body is loaded the form is automatically submitted.
Note: In order to test both solutions, create two applications on your web server and post the form from the first to the second website, then get the values in the second website using Request.Form["field-name"] (fields are posted by their name attribute).
I hope this was a useful post for you. Regards, Hajan

    Read the article

  • Inside The Kindle Paperwhite’s Display [Video]

    - by Jason Fitzpatrick
    By far the most praised feature of the new Amazon Kindle Paperwhite ebook reader is the new display; this video takes you behind the scenes with the design team and highlights what exactly makes the evenly lit display work so well. Accounting for the promotional nature of the video, it’s still fascinating to take a look at how they crafted the front plate of the display to yield such an even front-lit effect. You can read more about the Kindle Paperwhite here. [via ExtremeTech]

    Read the article

  • MySQL Connector/Net 6.4.6 Maintenance Release has been released

    - by fernando
    MySQL Connector/Net 6.4.6, a new version of the all-managed .NET driver for MySQL, has been released. This is a maintenance release and is recommended for use in production environments. It is appropriate for use with MySQL server versions 5.0-5.6. This is intended to be the final release for Connector/NET 6.4. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point; if you can't find this version on some mirror, please try again later or choose another download site). The 6.4.6 version of MySQL Connector/Net brings the following fixes:
- Fix for List.Contains generating a bunch of ORs instead of the more efficient IN clause in LINQ to Entities (Oracle bug #14016344, MySql bug #64934).
- Fix for error when trying to change the name of an Index in the Indexes/Keys editor; along with this fix, users can now change the Index type of a new Index, which could not be done in previous versions, and when changing the Index name the change is reflected in the list view at the left side of the Indexes/Keys editor (Oracle bug #13613801).
- Fix for stored procedure call using only its name with EF code first (MySql bug #64999, Oracle bug #14008699).
- Fix for performance issue in generated EF query: .NET StartsWith/Contains/EndsWith produces MySql's LOCATE instead of LIKE (MySql bug #64935, Oracle bug #14009363).
- Fix for script generated for code first containing a wrong ALTER TABLE and a wrong declaration for byte[] (MySql bug #64216, Oracle bug #13900091).
- Fix for exception thrown when using cascade delete in an EDM Model-First in Entity Framework (Oracle bug #14008752, MySql bug #64779).
- Fix for session locking issue with MySqlSessionStateStore (MySql bug #63997, Oracle bug #13733054).
- Fixed deleting a user profile using the Profile provider (MySQL bug #64409, Oracle bug #13790123).
- Fix for bug "Cannot Create an Entity with a Key of Type String" (MySQL bug #65289, Oracle bug #14540202). This fix checks whether the type has a FixedLength facet set in order to create a char; otherwise it creates varchar, mediumtext or longtext types when using a String CLR type in Code First or Model First, and is also tested in Database First. Unit tests added for Code First and ProviderManifest.
- Fix for bug "CacheServerProperties can cause 'Packet too large' error" (MySQL Bug #66578, Orabug #14593547).
- Fix for handling unnamed parameters in MySqlCommand. This fix allows MySqlCommand to handle parameters without requiring naming (e.g. INSERT INTO Test (id,name) VALUES (?, ?)) (MySQL Bug #66060, Oracle bug #14499549).
- Fixed inheritance in Entity Framework Code First scenarios. The discriminator column is created using its correct type, varchar(128) (MySql bug #63920 and Oracle bug #13582335).
- Fixed "Trying to customize column precision in Code First does not work" (MySql bug #65001, Oracle bug #14469048).
- Fixed bug "ASP.NET Membership database fails on MySql database UTF32" (MySQL bug #65144, Oracle bug #14495292).
- Fix for MySqlCommand.LastInsertedId holding only 32-bit values (MySql bug #65452, Oracle bug #14171960) by changing several internal declarations of lastinsertid from int to long.
- Fixed "Decimal type should have digits at right of decimal point"; the default is now 2, but the user's changes in the EDM designer are recognized (MySql bug #65127, Oracle bug #14474342).
- Fix for NullReferenceException when saving an uninitialized row in Entity Framework (MySql bug #66066, Oracle bug #14479715).
- Fix for error when calling RoleProvider.RemoveUserFromRole(): it caused an exception due to a wrong table being used (MySql bug #65805, Oracle bug #14405338).
- Fix for "Memory Leak on MySql.Data.MySqlClient.MySqlCommand": too many MemoryStream instances were created (MySql bug #65696, Oracle bug #14468204).
- Small improvement to MySqlPoolManager CleanIdleConnections for better idle-cleanup timer behavior at startup (MySql bug #66472 and Oracle bug #14652624).
- Fix for bug "TIMESTAMP values are mistakenly represented as DateTime with Kind = Local" (Mysql bug #66964, Oracle bug #14740705).
- Fix for bug "Keyword not supported. Parameter name: AttachDbFilename" (Mysql bug #66880, Oracle bug #14733472).
- Added support to the MySql script file to retrieve data when using "SHOW" statements.
- Fix for Package Load Failure in Visual Studio 2005 (MySql bug #63073, Oracle bug #13491674).
- Fix for bug "Unable to connect using IPv6 connections" (MySQL bug #67253, Oracle bug #14835718).
- Added auto-generated values for Guid identity columns (MySql bug #67450, Oracle bug #15834176).
- Fix for method FirstOrDefault not supported in some LINQ to Entities queries (MySql bug #67377, Oracle bug #15856964).
The release is available to download at http://dev.mysql.com/downloads/connector/net/6.4.html
Documentation: You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html
You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support!
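To illustrate one of the fixes above, here is a minimal sketch of reading MySqlCommand.LastInsertedId, which this release stops truncating to 32 bits. The connection string and the Test table (with an auto-increment primary key) are assumptions made for the example, not part of the release notes.

    using System;
    using MySql.Data.MySqlClient;

    class LastInsertedIdExample
    {
        static void Main()
        {
            // Hypothetical connection string and schema: Test(id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(50)).
            var connectionString = "server=localhost;database=test;uid=user;pwd=password";
            using (var connection = new MySqlConnection(connectionString))
            {
                connection.Open();
                var command = new MySqlCommand("INSERT INTO Test (name) VALUES (@name)", connection);
                command.Parameters.AddWithValue("@name", "hajan");
                command.ExecuteNonQuery();

                // LastInsertedId is a long, so identifiers above Int32.MaxValue are no longer truncated.
                long newId = command.LastInsertedId;
                Console.WriteLine("New row id: {0}", newId);
            }
        }
    }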

    Read the article

  • Can't adjust screen brightness on Macbook Pro 10,1 Ubuntu 13.10

    - by ben101
    I recently installed Ubuntu on my retina MacBook Pro (following this great guide: http://cberner.com/2013/03/01/installing-ubuntu-13-04-on-macbook-pro-retina/). Everything works fine so far; however, the screen brightness/backlight cannot be adjusted, neither by using the assigned key nor by any other means. I know I'm not the first to raise this problem, but none of the suggested solutions I have found so far worked for me. I unsuccessfully tried the following:
- Including Option "RegistryDwords" "EnableBrightnessControl=1" in the Device section of /etc/X11/xorg.conf
- The application xbacklight
I also haven't found any file such as mbp_backlight or apple_backlight on my system, which would probably be a starting point. I'm using the Nvidia driver. (BTW: with the nouveau driver, the keys to adjust the brightness work; however, with the nouveau driver Ubuntu does not resume from suspend.) Any suggestions what I can do? Thanks!
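For reference, the xorg.conf change the poster describes would presumably look something like the snippet below; the Identifier is a placeholder and the exact Device section depends on the system, so treat this as a sketch of the attempted fix rather than a known-working configuration.

    Section "Device"
        Identifier  "Device0"
        Driver      "nvidia"
        Option      "RegistryDwords" "EnableBrightnessControl=1"
    EndSection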

    Read the article

  • Xbox Live Traffic Light Tells You When It’s Game Time

    - by Jason Fitzpatrick
    Why log on to see if your friends are available for a game of Halo 3 when you can glance at this traffic-light-indicator to see if it’s go time? Courtesy of tinker and gamer AndrewF, this fun little hack combines a small traffic light, an Arduino board, and the Xbox live API to provide a real-time indicator of how many of your friends are online and gaming. When the light is red, nobody is available to play. Yellow and green indicate one and several of your friends are available. Hit up the link below to check out the parts list and project code. Xbox Live Traffic Lights [via Hack A Day]

    Read the article

  • What is the quickest way to indent a block of text with spaces for use within a web browser?

    - by ændrük
    I occasionally have the need to indent a block of text with spaces for use within a web browser, for example, when formatting a code block on this site or in a post on Launchpad. So far I've just done it by hand by copying four spaces to the clipboard and then mashing keys really fast: ↓, Home, Ctrl+V (repeat). What is the quickest way to accomplish this? Copying and pasting to another program? (Which?) A Firefox or Chrome browser extension? A command to directly modify the clipboard contents? An auto-typing program?

    Read the article

  • keyboard shortcut editor does not intercept keypresses

    - by jpic
    I've been using suckless dwm for years and I really need to make the shortcuts match to feel at home ;) On Ubuntu Oneiric, the keyboard shortcut editor is opened with: System Settings - Keyboard - Shortcuts. The help in the window specifies: 'To edit a shortcut, click the row and hold down the new keys or press backspace to clear.' So I select the first row of the 'Navigation' section and highlight 'Move window to workspace 1'. Then I hold down Ctrl+Alt+1 for ten seconds, but nothing happens. The shortcut still appears as 'Disabled'. I'm unable to set any shortcut; I've tried many combinations. For example, a combination with the Super key will be intercepted by Unity instead of being caught by the keyboard shortcut editor window. Can anybody reproduce this on Oneiric? What am I doing wrong?

    Read the article

  • How to hire a web-programmer : for non-programmer

    - by 0Complex
    I am a non-programmer who has used the services of freelancer, oDesk, etc. I've tried asking for what I need, but I can't find anyone who can show me any type of example similar to what I request in the specs for the web programming. They have front ends and back ends, but they don't fulfill true "live" website requirements: "live" as in ready to support traffic, keys in hand, and able to be updated constantly by me, ... How do I figure out how to evaluate a programmer? How do I bid the appropriate price for the services?

    Read the article

  • Team Leaders & Authors - Manage and Report Workflow using "Print an Outline" in UPK

    - by [email protected]
    Did you know you can "print an outline?" You can print any outline or portion of an outline. Why might you want to "print an outline" in UPK... Have you ever wondered how many topics you have recorded, how many of your topics are ready for review, or even better, how many topics are complete! Do you need to report your project status to management? Maybe you just like to have a copy of your outline to refer to during development. Included in this output is the outline structure as well as the layout defined in the Details View of the Outline Editor. To print an outline, you must open either a module or section in the Outline Editor. A set of default data columns is automatically included in the output; however, you can configure which columns you want to appear in the report by switching to the Details view and customizing the columns. (To learn more about customizing your columns refer to the Add and Remove Columns section of the Content Development.pdf guide) To print an outline from the Outline Editor: 1. Open a module or section document in the Outline Editor. 2. Expand the documents to display the details that you want included in the report. 3. On the File menu, choose Print and use the toolbar icons to print, view, or save the report to a file. Personally, I opt to save my outline in Microsoft Excel. Using the delivered features of Microsoft Excel you can add columns of information, such as development notes, to your outline or you can graph and chart your Project status. As mentioned above you can configure what columns you want to appear in the outline. When utilizing the Print an Outline feature in conjunction with the Managing Workflow features of the UPK Multi-user instance you as a Team Lead or Author can better report project status. Read more about Managing Workflow below. Managing Workflow: The Properties toolpane contains special properties that allow authors to track document status or State as well as assign Document Ownership. Assign Content State The State property is an editable property for communicating the status of a document. This is particularly helpful when collaborating with other authors in a development team. Authors can assign a state to documents from the master list defined by the administrator. The default list of States includes (blank), Not Started, Draft, In Review, and Final. Administrators can customize the list by adding, deleting or renaming the values. To assign a State value to a document: 1. Make sure you are working online. 2. Display the Properties toolpane. 3. Select the document(s) to which you want to assign a state. Note: You can select multiple documents using the standard Windows selection keys (CTRL+click and SHIFT+click). 4. In the Workflow category, click in the State cell. 5. Select a value from the list. Assign Document Ownership In many enterprises, multiple authors often work together developing content in a team environment. Team leaders typically handle large projects by assigning specific development responsibilities to authors. The Owner property allows team leaders and authors to assign documents to themselves and other authors to track who is responsible for a specific document. You view and change document assignments for a document using the Owner property in the Properties toolpane. To assign a document owner: 1. Make sure you are working online. 2. On the View menu, choose Properties. 3. Select the document(s) to which you want to assign document responsibility. 
Note: You can select multiple documents using the standard Windows selection keys (CTRL+click and SHIFT+click). 4. In the Workflow category, click in the Owner cell. 5. Select a name from the list. Is anyone out there already using this feature? Share your ideas with the group. Those of you new to this feature, give it a test drive and let us know what you think. - Kathryn Lustenberger, Oracle UPK & Tutor Outbound Product Management

    Read the article

  • What to send to server in real time FPS game?

    - by syloc
    What is the right way to tell the server the position of our local player? Some documents say that it is better to send the inputs whenever they are produced, and some documents say the client should send its position at a fixed interval. With the send-the-inputs approach: what should I do if the player is holding down the direction keys? It means I need to send a packet to the server every frame. Isn't that too much? And there is also the rotation of the player from the mouse input. Here is an example: http://www.gabrielgambetta.com/fpm_live.html What about the send-the-position-at-a-fixed-interval approach? It sends far fewer messages to the server, but it also reduces responsiveness. So which way is better?
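Neither approach is prescribed by the original question, but a common compromise, sketched below in C# with hypothetical types, is to sample input every frame and batch it to the server at a fixed tick rate. Holding a direction key then no longer means one packet per frame, while the server still receives inputs (not raw positions) and can compute the authoritative position itself.

    using System.Collections.Generic;

    // Hypothetical input and network types for illustration only.
    public struct InputSample { public float MoveX, MoveY; public float AimYaw, AimPitch; public bool Fire; public int Frame; }
    public interface INetworkChannel { void Send(IList<InputSample> batch); }

    public class InputSender
    {
        private readonly INetworkChannel channel;
        private readonly List<InputSample> pending = new List<InputSample>();
        private readonly float sendInterval;   // e.g. 1/20 s => 20 packets per second
        private float accumulator;

        public InputSender(INetworkChannel channel, float sendRateHz)
        {
            this.channel = channel;
            sendInterval = 1f / sendRateHz;
        }

        // Call once per rendered frame with that frame's input.
        public void Update(InputSample sample, float deltaTime)
        {
            pending.Add(sample);          // keep every frame's input...
            accumulator += deltaTime;
            if (accumulator >= sendInterval)
            {
                channel.Send(pending);    // ...but put it on the wire in batches
                pending.Clear();
                accumulator -= sendInterval;
            }
        }
    }

The fixed send rate bounds bandwidth regardless of frame rate, and because whole input samples are sent, the server can still reject or correct impossible movement, which sending raw positions does not allow.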

    Read the article
