Search Results

Search found 34016 results on 1361 pages for 'static content'.


  • How to add locale aware CSS to an individual component in ADF Faces.

    - by [email protected]
    When creating a skin in ADF Faces, it's (relatively) easy to add locale-aware CSS using the :rtl pseudo-class on the end of a skinning key. Example:

        af|inputListOfValues::content {
          padding-left: 3px;
        }
        af|inputListOfValues::content:rtl {
          padding-left: 0px;
          padding-right: 3px;
        }

    In this example, we want some padding before the start of the text in the content element of the component. In right-to-left locales, the start of the text is on the right side by default, so we need to change the padding from the left to the right.

    Let's say, however, that you want to specify a locale-aware CSS style on an individual component using the contentStyle attribute. There is a handy ADF Faces EL function to help you out here: isRTL. For our example, let's say we want an inputText component whose text content is 'end' aligned. If you weren't considering RTL locales, you could code this as:

        <af:inputText id="idInputTextRight" label="right aligned" value="Test"
                      contentStyle="text-align: right;"/>

    This, however, will be right aligned regardless of locale. This is where isRTL() comes to the rescue. This is how we would code this to be locale aware:

        <af:inputText id="idInputTextEnd" label="end aligned" value="Test"
                      contentStyle="text-align: #{af:isRTL()?'left':'right'};"/>

    The af:isRTL() EL function returns true if we are rendering in RTL, so we can use it to pick the appropriate text alignment.

    Read the article

  • What's up with stat on Macos/Darwin? Or filesystems without names...

    - by Charles Stewart
    In response to a question I asked on SO, Give the mount point of a path, one respondent suggested using stat to get the device name associated with the volume of a given path. This works nicely on Linux, but gives crazy results on Mac OS 10.4. For my system, df and mount give:

        cas cas$ df
        Filesystem 512-blocks Used Avail Capacity Mounted on
        /dev/disk0s3 58342896 49924456 7906440 86% /
        devfs 194 194 0 100% /dev
        fdesc 2 2 0 100% /dev
        1024 1024 0 100% /.vol
        automount -nsl [166] 0 0 0 100% /Network
        automount -fstab [170] 0 0 0 100% /automount/Servers
        automount -static [170] 0 0 0 100% /automount/static
        /dev/disk2s1 163577856 23225520 140352336 14% /Volumes/Snapshot
        /dev/disk2s2 409404102 5745938 383187960 1% /Volumes/Sparse

        cas cas$ mount
        /dev/disk0s3 on / (local, journaled)
        devfs on /dev (local)
        fdesc on /dev (union)
        on /.vol
        automount -nsl [166] on /Network (automounted)
        automount -fstab [170] on /automount/Servers (automounted)
        automount -static [170] on /automount/static (automounted)
        /dev/disk2s1 on /Volumes/Snapshot (local, nodev, nosuid, journaled)
        /dev/disk2s2 on /Volumes/Sparse (asynchronous, local, nodev, nosuid)

    Trying to get the devices from the mount points, though:

        cas cas$ df | grep -e/ | awk '{print $NF}' | while read line; do echo $line $(stat -f"%Sdr" $line); done
        / disk0s3r
        /dev ???r
        /dev ???r
        /.vol ???r
        /Network ???r
        /automount/Servers ???r
        /automount/static ???r
        /Volumes/Snapshot disk2s1r
        /Volumes/Sparse disk2s2r

    Here, I'm feeding each of the mount points scraped from df to stat, outputting the results of the "%Sdr" format string, which is supposed to be the device name. Cf. the stat(1) man page:

        The special output specifier S may be used to indicate that the output, if applicable, should be in string format. May be used in combination with:
        ...
        dr      Display actual device name.

    What's going on? Is it a bug in stat, or some Darwin VFS weirdness?

    Postscript: Per Andrew McGregor, try passing "%Sd" to stat for more weirdness. It lists some apparently arbitrary subset of files from CWD...
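
    As a cross-language sanity check (an addition to the post, not part of the original question), the device identity of a path can also be read with Python's os.stat(), which reports the numeric st_dev rather than a name, so it works even for the unnamed filesystems above. The example paths are placeholders:

        # Print the device number (major, minor) for each given mount point.
        # A numeric cross-check for stat(1)'s "%Sdr" output; paths are examples.
        import os
        import sys

        def device_of(path):
            st = os.stat(path)
            return os.major(st.st_dev), os.minor(st.st_dev)

        for path in (sys.argv[1:] or ["/", "/dev", "/.vol", "/Volumes/Snapshot"]):
            try:
                major, minor = device_of(path)
                print(f"{path}: device {major},{minor}")
            except OSError as exc:
                print(f"{path}: {exc}")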

    Read the article

  • WebCenter Customer Advisory Board meetings kick off Oracle Open World 2012!

    - by Lance711
    Welcome to OpenWorld! OpenWorld 2012 got underway today with a series of meetings with the members of the WebCenter Customer Advisory Board. Led by the WebCenter Product Management team, these meetings are a great way for the product team and customers to interact directly and discuss real-life business challenges, product details, and upcoming features and functionality. This year, board members took part in discussions and live demos of product enhancements that will be featured throughout the coming week. Highlights included a variety of new mobile and social solutions, a great new user interface for WebCenter Content, plus new Portal and Sites functionality that makes the experience for the everyday user a lot more pleasant.

    The day kicked off with Roel Stalman, VP of Product Management, giving a detailed overview of what’s new in WebCenter. Given all the improvements to discuss, this session went over 2 hours! Roel showcased the brand new UI for Content, Portal and Sites. He also gave live demos of the new mobile apps for WebCenter Content, Portal and the Oracle Social Network. The attendees then broke into sub-groups to deep-dive with Product Management for the Portal, Sites, and Content product areas on specific functionality and application integrations.

    If you are here in San Francisco this week for OpenWorld, I definitely recommend stopping by the WebCenter area in the Moscone West Exhibition Hall to see some of this new functionality for yourself. And be sure to check out the WebCenter sessions throughout the week, as those give us a chance to discuss direction and strategy, answer your questions and get your feedback and ideas. For those of you who could not make it to OpenWorld this year, we miss you! You can stay in touch with what is happening via this blog and by following #oow and #webcenter on Twitter. Additionally, we will be rolling out details on upcoming products and release info over the coming months via this blog and web seminars. Stay tuned!

    Read the article

  • Oracle’s Java Community Outreach Plan

    - by Tori Wieldt
    As the steward of Java, Oracle recognizes the importance and value of the Java community, and the relevant role it plays in keeping Java the largest, most vibrant developer community in the world. In order to increase Oracle’s touch with Java developers worldwide, we are shifting our focus from a flagship JavaOne event followed by several regional JavaOne conferences to a new outreach model which continues with the JavaOne flagship event, as well as a mix of online content, regional Java Tours, and regional 3rd party event participation.

    1. JavaOne
    JavaOne continues to remain the premier hub for Java developers, where you are given the opportunity to improve your Java technical skills and interact with other members of the Java community. JavaOne is centered on open collaboration and sharing, and Oracle will continue to invest in JavaOne as a unique stand-alone event for the Java community. Oracle recognizes that many developers cannot attend JavaOne in person, therefore Oracle will share the wealth of the unique event material with those developers through a new and easy-to-access online Java program. While online JavaOne content cannot replace actual face-to-face community/developer engagement and networking, online content does aid in extending the Java technical learning opportunity to a broader collection of developers.

    2. Java Developer Day Tours
    Oracle will execute regional Java Developer Days with recognized Java User Groups (JUGs), with participation from Java Evangelists and Java Champions. This allows local, region-specific Java topics to be addressed both by Oracle and the Java community. In addition, Oracle will deliver more virtual technical content programs to reach developers where an existing JUG may not have a presence.

    3. Sponsorship of Community-Driven Regional Events/Conferences
    Oracle also recognizes that improved community dialog and relations are achievable through continued Oracle sponsorship and onsite participation at both established/well-recognized 3rd party events and new emerging/growing 3rd party events.

    Oracle’s ultimate goal is to be an even better steward for Java by reaching more of the Java ecosystem with face-to-face and online community engagements. We look forward to planning tours and events with you, members of the Java community.

    Read the article

  • Odd squid transparent redirect behavior

    - by EMiller
    This is the first time I've set up squid. It's running a redirect script that does some text search/replace on html pages, and then saves them to a location on the same machine on the nginx path - then issues the redirect to that URL (it's an art project :D). The relevant lines in squid.conf are:

        http_port 3128 transparent
        redirect_program /etc/squid/jefferson_redirect.py

    The jefferson_redirect.py script is based on this script: http://gofedora.com/how-to-write-custom-redirector-rewritor-plugin-squid-python/

    The issue: I'm getting strange http redirect behavior. For example, here is the normal request/response from a PHP script that issues a header("Location:"); - a 302 redirect:

        http://redirector.mysite.com/?unicmd=g+yreka

        GET /?unicmd=g+yreka HTTP/1.1
        Host: redirector.mysite.com
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-1.fc12 Firefox/3.5.9
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive

        HTTP/1.1 302 Found
        Date: Tue, 13 Apr 2010 05:15:43 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.11
        Location: http://www.google.com/search?q=yreka
        Content-Type: text/html
        Vary: User-Agent,Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 2108
        Keep-Alive: timeout=3, max=100
        Connection: Keep-Alive

    Here's what it looks like when running through the squid proxy (note that "redirector.mysite.com" is not the site running squid or nginx):

        http://redirector.mysite.com/?unicmd=g+yreka

        GET /?unicmd=g+yreka HTTP/1.1
        Host: redirector.mysite.com
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-1.fc12 Firefox/3.5.9
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Proxy-Connection: keep-alive
        If-Modified-Since: Tue, 13 Apr 2010 05:21:02 GMT

        HTTP/1.0 200 OK
        Server: nginx/0.7.62
        Date: Tue, 13 Apr 2010 05:21:10 GMT
        Content-Type: text/html
        Content-Length: 17865
        Last-Modified: Tue, 13 Apr 2010 05:21:10 GMT
        Accept-Ranges: bytes
        X-Cache: MISS from jefferson
        X-Cache-Lookup: HIT from jefferson:3128
        Via: 1.1 jefferson:3128 (squid/2.7.STABLE6)
        Connection: keep-alive
        Proxy-Connection: keep-alive

    It is basically working - but the URL http://redirector.mysite.com/?unicmd=g+yreka remains unchanged, while displaying the google page (mostly broken as it's using URLs relative to redirector.mysite.com). I've experienced a similar thing with google results pages: when clicking to another page from google, I get a google URL, with the other site's content. Sorry for the long post - many thanks if you've read this far! Any ideas?
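
    One detail worth noting (an editorial addition, not from the original post): with the Squid 2.x redirector protocol, a helper that simply returns a new URL makes Squid fetch and serve that URL under the original address - which matches the symptom above - whereas prefixing the returned URL with "302:" asks Squid to send a client-visible HTTP redirect instead. Below is a minimal, hypothetical sketch of that pattern; rewrite_url() and its behavior are placeholders, not the actual jefferson_redirect.py logic:

        #!/usr/bin/env python
        # Minimal Squid redirector sketch (hypothetical, not the original script).
        # Squid feeds one request per line on stdin: "URL client/fqdn ident method ..."
        # and reads the replacement URL (or a blank line for "no change") on stdout.
        import sys

        def rewrite_url(url):
            # Placeholder for the real logic (e.g. save a rewritten copy under the
            # nginx docroot and return the URL it will be served from).
            return url

        def main():
            for line in sys.stdin:
                parts = line.split()
                if not parts:
                    print()                 # blank reply: leave the request unchanged
                    sys.stdout.flush()
                    continue
                new_url = rewrite_url(parts[0])
                # The "302:" prefix tells Squid to issue a client-visible redirect
                # rather than silently rewriting the request.
                print("302:" + new_url)
                sys.stdout.flush()          # Squid expects an unbuffered reply per line

        if __name__ == "__main__":
            main()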

    Read the article

  • Adding a CMS to an existing Magento shop

    - by user6341
    I am working on a project for 3 niche stores built on Magento (using Magento's multi-store function) that each get roughly 50k unique visitors a day. The sites don't currently have a blog or forum or any social networking aspects. I would like to add a CMS to each site that can be centrally run, and would like it to take over the front-end content from Magento. I would also like it to maintain an online blog/publication of sorts with videos, articles, and the like, with privileges to edit the content given to a dozen or so people with different privileges. I want to add a forum to each site that is fairly robust, and to possibly add some social networking aspects down the road, so extensibility and available plugins/mods in each CMS is important. Other than shared login between the forums, blog/publication and store, I would like to be able to integrate some content from the forums and blog/publication into the store as well. After researching this a bit, I am inclined towards Drupal, but I haven't found any modules to integrate it with Magento. Also, since the blog content will be done by about a dozen nontechnical people, I want something that is very easy to work with. Lastly, since the site gets a good amount of traffic, speed and security are very important. What CMS would you recommend integrating in this context? Deciding between Drupal, WordPress and ModX. Also considering Plone. Thanks.

    Read the article

  • Getting NFS clients to retry mount if NFS server down when client boots

    - by z0mbix
    I have an NFS server that several clients mount. I am using the following in my /etc/exports on the server:

        /content *(rw,no_root_squash)

    and on the clients in my /etc/fstab I have:

        content.prd.domain.tld:/content /content nfs rw,hard,intr 0 0

    If the clients boot while the NFS server is down, the share does not get mounted. I read in the NFS man page that the retry defaults should handle this:

        retry=n   The number of minutes to retry an NFS mount operation in the
                  foreground or background before giving up. The default value for
                  foreground mounts is 2 minutes. The default value for background
                  mounts is 10000 minutes, which is roughly one week.

    I have tested this, but it doesn't appear to work. Am I missing something? All servers are RHEL 5.4. Cheers, z0mbix

    Read the article

  • vSphere ESX 5.5 hosts cannot connect to NFS Server

    - by Gerald
    Summary: My problem is I cannot use the QNAP NFS server as an NFS datastore from my ESX hosts, despite the hosts being able to ping it. I'm utilising a vDS with LACP uplinks for all my network traffic (including NFS) and a subnet for each vmkernel adapter.

    Setup: I'm evaluating vSphere and I've got two vSphere ESX 5.5 hosts (node1 and node2) and each one has 4x NICs. I've teamed them all up using LACP/802.3ad with my switch and then created a distributed switch between the two hosts with each host's LAG as the uplink. All my networking is going through the distributed switch; ideally, I want to take advantage of DRS and the redundancy. I have a domain controller VM ("Central") and a vCenter VM ("vCenter") running on node1 (using node1's local datastore) with both hosts attached to the vCenter instance. Both hosts are in a vCenter datacenter and a cluster with HA and DRS currently disabled. I have a QNAP TS-669 Pro (Version 4.0.3) (the TS-x69 series is on the VMware Storage HCL) which I want to use as the NFS server for my NFS datastore; it has 2x NICs teamed together using 802.3ad with my switch.

    vmkernel.log: The error from the host's vmkernel.log is not very useful:

        NFS: 157: Command: (mount) Server: (10.1.2.100) IP: (10.1.2.100) Path: (/VM) Label (datastoreNAS) Options: (None)
        cpu9:67402)StorageApdHandler: 698: APD Handle 509bc29f-13556457 Created with lock[StorageApd0x411121]
        cpu10:67402)StorageApdHandler: 745: Freeing APD Handle [509bc29f-13556457]
        cpu10:67402)StorageApdHandler: 808: APD Handle freed!
        cpu10:67402)NFS: 168: NFS mount 10.1.2.100:/VM failed: Unable to connect to NFS server.

    Network Setup: Here is my distributed switch setup (JPG). Here are my networks:

    - 10.1.1.0/24 VM Management (VLAN 11)
    - 10.1.2.0/24 Storage Network (NFS, VLAN 12)
    - 10.1.3.0/24 VM vMotion (VLAN 13)
    - 10.1.4.0/24 VM Fault Tolerance (VLAN 14)
    - 10.2.0.0/24 VM's Network (VLAN 20)

    vSphere addresses:

    - 10.1.1.1 node1 Management
    - 10.1.1.2 node2 Management
    - 10.1.2.1 node1 vmkernel (for NFS)
    - 10.1.2.2 node2 vmkernel (for NFS)
    - etc.

    Other addresses:

    - 10.1.2.100 QNAP TS-669 (NFS server)
    - 10.2.0.1 Domain Controller (VM on node1)
    - 10.2.0.2 vCenter (VM on node1)

    I'm using a Cisco SRW2024P Layer-2 switch (jumbo frames enabled) with the following setup:

    - LACP LAG1 for node1 (ports 1 through 4) set up as a VLAN trunk for VLANs 11-14,20
    - LACP LAG2 for my router (ports 5 through 8) set up as a VLAN trunk for VLANs 11-14,20
    - LACP LAG3 for node2 (ports 9 through 12) set up as a VLAN trunk for VLANs 11-14,20
    - LACP LAG4 for the QNAP (ports 23 and 24) set up to accept untagged traffic into VLAN 12

    Each subnet is routable to another, although connections to the NFS server from vmk1 shouldn't need it. All other traffic (vSphere Web Client, RDP etc.) goes through this setup fine. I tested the QNAP NFS server beforehand using ESX host VMs atop a VMware Workstation setup with a dedicated physical NIC and it had no problems. The ACL on the NFS server share is permissive and allows all subnet ranges full access to the share.

    I can ping the QNAP from node1 vmk1, the adapter that should be used for NFS:

        ~ # vmkping -I vmk1 10.1.2.100
        PING 10.1.2.100 (10.1.2.100): 56 data bytes
        64 bytes from 10.1.2.100: icmp_seq=0 ttl=64 time=0.371 ms
        64 bytes from 10.1.2.100: icmp_seq=1 ttl=64 time=0.161 ms
        64 bytes from 10.1.2.100: icmp_seq=2 ttl=64 time=0.241 ms

    Netcat does not throw an error:

        ~ # nc -z 10.1.2.100 2049
        Connection to 10.1.2.100 2049 port [tcp/nfs] succeeded!

    The routing table of node1:

        ~ # esxcfg-route -l
        VMkernel Routes:
        Network      Netmask         Gateway        Interface
        10.1.1.0     255.255.255.0   Local Subnet   vmk0
        10.1.2.0     255.255.255.0   Local Subnet   vmk1
        10.1.3.0     255.255.255.0   Local Subnet   vmk2
        10.1.4.0     255.255.255.0   Local Subnet   vmk3
        default      0.0.0.0         10.1.1.254     vmk0

    VM kernel NIC info:

        ~ # esxcfg-vmknic -l
        Interface Port Group/DVPort IP Family IP Address Netmask Broadcast MAC Address MTU TSO MSS Enabled Type
        vmk0 133 IPv4 10.1.1.1 255.255.255.0 10.1.1.255 00:50:56:66:8e:5f 1500 65535 true STATIC
        vmk0 133 IPv6 fe80::250:56ff:fe66:8e5f 64 00:50:56:66:8e:5f 1500 65535 true STATIC, PREFERRED
        vmk1 164 IPv4 10.1.2.1 255.255.255.0 10.1.2.255 00:50:56:68:f5:1f 1500 65535 true STATIC
        vmk1 164 IPv6 fe80::250:56ff:fe68:f51f 64 00:50:56:68:f5:1f 1500 65535 true STATIC, PREFERRED
        vmk2 196 IPv4 10.1.3.1 255.255.255.0 10.1.3.255 00:50:56:66:18:95 1500 65535 true STATIC
        vmk2 196 IPv6 fe80::250:56ff:fe66:1895 64 00:50:56:66:18:95 1500 65535 true STATIC, PREFERRED
        vmk3 228 IPv4 10.1.4.1 255.255.255.0 10.1.4.255 00:50:56:72:e6:ca 1500 65535 true STATIC
        vmk3 228 IPv6 fe80::250:56ff:fe72:e6ca 64 00:50:56:72:e6:ca 1500 65535 true STATIC, PREFERRED

    Things I've tried/checked:

    - I'm not using DNS names to connect to the NFS server.
    - Checked MTU. Set to 9000 for vmk1, the dvSwitch, the Cisco switch and the QNAP.
    - Moved the QNAP onto VLAN 11 (VM Management, vmk0) and gave it an appropriate address; still had the same issue. Changed back afterwards, of course.
    - Tried initiating the connection to the NAS datastore from the vSphere Client (connected to vCenter or directly to the host), the vSphere Web Client and the host's ESX shell. All resulted in the same problem.
    - Tried a path name of "VM", "/VM" and "/share/VM" despite not even having a connection to the server.
    - Plugged a Linux system (10.1.2.123) into a switch port configured for VLAN 12 and tried mounting the NFS share 10.1.2.100:/VM; it worked successfully and I had read-write access to it.
    - Tried disabling the firewall on the ESX host: esxcli network firewall set --enabled false

    I'm out of ideas on what to try next. The things I'm doing differently from my VMware Workstation setup are the use of LACP with a physical switch and a virtual distributed switch between the two hosts. I'm guessing the vDS is probably the source of my troubles, but I don't know how to fix this problem without eliminating it.

    Read the article

  • Merging multiple top-level domains into a single domain

    - by user23089
    My client had multiple top-level domains. Each one represented an insurance program within a specific vertical. Across the sites at these alternate domains, there was a 30/70 mix of duplicate vs. original content. Some of the alternate domains ranked very well for their target keyphrase groups, while others were absent from results pages. We advised the client to merge the multiple domains into their existing main domain, for usability and SEO reasons. We recently ran the merger. Here was our process:

    - On the main domain, transfer the content such that it matches 1-for-1 the content on the various alternate domains
    - Set up Google Webmaster Tools on the main domain
    - Push the new content on the main domain live and submit a corresponding sitemap to Google
    - Establish 301 redirects on the alternate domains, such that each alternate domain URL points to its respective page on the main domain

    We did this 12 days ago, and pages (previously on the alternate domains) that had ranked well on Google have now plummeted or are entirely non-existent. Did we do the right thing by merging multiple top-level domains into a single domain? Is this initial dip in rankings normal? How soon should we expect to see the rankings return to normal?

    Read the article

  • mod_deflate Supported Encodings for Compression

    - by sparc
    It seems to me that mod_deflate in Apache 2.2 will always return:

        Content-Encoding: gzip

    and never:

        Content-Encoding: deflate

    It was explained to me that although there is a deflate algorithm, mod_deflate is named after a file format, in which the algorithm could be any of: gzip, bzip, pkzip. Of those three, mod_deflate provides gzip. It seems as though gzip is the most popular and widely-supported algorithm in web browsers, but I know some web servers and proxies do return Content-Encoding: deflate. Aside from the confusion over the module's name, is it true that mod_deflate will only return Content-Encoding: gzip? Thank you.
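
    One way to check this empirically (an addition, not part of the original question) is to request the same URL while advertising different Accept-Encoding values and compare the Content-Encoding the server answers with. A minimal Python sketch using only the standard library; the URL is a placeholder for the server under test:

        # Probe which Content-Encoding a server picks for different Accept-Encoding offers.
        # The URL is a placeholder; point it at the Apache/mod_deflate server under test.
        import urllib.request

        URL = "http://example.com/some-compressible-page.html"

        for accept in ("gzip", "deflate", "gzip, deflate"):
            req = urllib.request.Request(URL, headers={"Accept-Encoding": accept})
            with urllib.request.urlopen(req) as resp:
                encoding = resp.headers.get("Content-Encoding", "(none)")
                print(f"Accept-Encoding: {accept:15} -> Content-Encoding: {encoding}")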

    Read the article

  • Is it worthwhile to block malicious crawlers via iptables?

    - by EarthMind
    I periodically check my server logs and I notice a lot of crawlers searching for the location of phpMyAdmin, Zen Cart, RoundCube, administrator sections and other sensitive data. Then there are also crawlers under the name "Morfeus Fucking Scanner" or "Morfeus Strikes Again" searching for vulnerabilities in my PHP scripts, and crawlers that perform strange (XSS?) GET requests such as:

        GET /static/)self.html(selector?jQuery(
        GET /static/]||!jQuery.support.htmlSerialize&&[1,
        GET /static/);display=elem.css(
        GET /static/.*.
        GET /static/);jQuery.removeData(elem,

    Until now I've always been storing these IPs manually to block them using iptables. But as these requests are only performed a maximum number of times from the same IP, I'm having my doubts whether blocking them provides any security-related advantage. I'd like to know if it does anyone any good to block these crawlers in the firewall, and if so, whether there's a (not too complex) way of doing this automatically. And if it's wasted effort, maybe because these requests come from new IPs after a while, I'd appreciate it if anyone could elaborate on this and maybe provide suggestions for more efficient ways of denying/restricting malicious crawler access. FYI: I'm also already blocking w00tw00t.at.ISC.SANS.DFind:) crawls using these instructions: http://spamcleaner.org/en/misc/w00tw00t.html
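
    For the "automatically" part of the question, one common pattern (a sketch under assumptions, not advice from the original post) is a small fail2ban-style log watcher: scan the access log for known scanner signatures and add a DROP rule for each offending IP. The log path, signatures and iptables invocation below are assumptions to adapt:

        # Sketch of a fail2ban-style blocker: scan an access log for scanner
        # signatures and block the source IPs via iptables. Paths and patterns
        # are assumptions; run with privileges sufficient to modify iptables.
        import re
        import subprocess

        LOG_FILE = "/var/log/apache2/access.log"          # assumed log location
        SIGNATURES = ("Morfeus", "w00tw00t", "phpmyadmin/scripts/setup.php")
        blocked = set()

        with open(LOG_FILE) as log:
            for line in log:
                if not any(sig.lower() in line.lower() for sig in SIGNATURES):
                    continue
                match = re.match(r"^(\d{1,3}(?:\.\d{1,3}){3})\s", line)
                if not match:
                    continue
                ip = match.group(1)
                if ip in blocked:
                    continue
                blocked.add(ip)
                # Insert a DROP rule for this source address.
                subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"],
                               check=False)

        print(f"Blocked {len(blocked)} addresses")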

    Read the article

  • Why do some bad websites rank well?

    - by BradB
    Consider the following scenario: you are pitching SEO/website optimisation to a prospective client and you explain to them the importance of great copy and content, how acquiring links (ethically) can increase page rank, why the quality of the HTML build matters (H1, H2 tags, W3C validation etc.), and why keyword research is beneficial. You may drop in a few Google Webmaster Guideline or Matt Cutts references to back up your claims, and rubbish the "black hat" approach as being no longer effective for good measure. Your advice is ethical and, in the eyes of best practices, spot on. Then the client points out to you some of their long-established competitors on Google, and you see these competitor websites ranking in the top spots (1 to 3) for medium to highly competitive search phrases that your client wants to compete for. These websites totally contradict your ethical approach and pretty much violate every best practice previously noted. They even outperform other "white hat" competitors who are in accordance with the above guidelines. I experienced this today. One of these well-ranking websites had:

    - About six microsites with more or less the same copy and a slightly varied layout
    - Little or no textual content; I would almost say duplicate content across the sites, but there was so little of it it could barely qualify as duplicate
    - All the content in Flash (with a music track that kicked in on each page load - not so much an SEO issue, but it helps paint the picture)
    - Keyword stuffing behind the Flash file, with a bunch of black text on a black background in the style of "keyword 1 keyword 2,keyword1,keyword 2,keyword 2 keyword 3" and so on, with the exact keyword-stuffed combination present on every page of the website
    - A bunch of clearly self-made links from poor-quality forums and directories with little or no PageRank
    - Links exchanged across the microsites

    How do you explain your way out of this when this hard evidence is sat in front of you, undermining your great pitch?

    Read the article

  • RPi and Java Embedded GPIO: Using Java to read input

    - by hinkmond
    Now that we've learned about using Java code to control the output of the Raspberry Pi GPIO ports (by lighting up LEDs from a Java app on the RPi for now, noting that in the future the same Java code can be used to drive industrial automation or medical equipment, etc.), let's move on to learn about reading input from the RPi GPIO using Java code. As before, we need to start out with the necessary hardware. For this exercise we will connect a static electricity detector to the RPi GPIO port and read the value of that sensor using Java code. The circuit we'll use is from William J. Beaty and is described at this Web link: Static Electricity Detector. He calls it an "Electric Charge" detector, which is a bit misleading. A Field Effect Transistor is subject to nearby electro-magnetic fields, such as a static charge on a nearby object, not really an electric charge. So, this sensor will detect static electricity (or ghosts, if you are into paranormal activity). Take a look at the circuit, and in the next blog posts we'll step through how to connect it to the GPIO port of your RPi and then how to write Java code to access this fun sensor. Hinkmond

    Read the article

  • Could not log-in properly but shows no error in joomla

    - by saeha
    This is what I did. I added variables in \libraries\joomla\database\table\user.php:

        var $img_content = null; // contains the blob type data
        var $img_name = null;
        var $img_type = null;

    Then I added this code in \components\com_user\controller.php:

        $file = JRequest::getVar( 'pic', '', 'files', 'array' );
        if(isset($file['name']))
        {
            jimport('joomla.filesystem.file');
            $fileName = $file['name'];
            $tmpName  = $file['tmp_name'];
            $fileSize = $file['size'];
            $fileType = $file['type'];
            $fp = fopen($tmpName, 'r');
            $content = fread($fp, filesize($tmpName));
            //$content = addslashes($content);
            fclose($fp);
            $user->set('img_name', $fileName);
            $user->set('img_type', $fileType);
            $user->set('img_content', $content);
        }

    That works fine, but I found this problem when logging in with a new user that has an uploaded photo; other users with an empty img_content field can log in properly. What happens is that when I log in using the user with an uploaded photo, it's not redirecting properly - it just returns to the login page - but when I log in through the backend using another user which is a super admin, I can see that user appears as logged in. I started saving the images in the database because I was having problems with the directory when I uploaded the site. I think the login was affected by the blob type data in the database. Could that be the problem? What could be the solution? -saeha

    Read the article

  • Wget save cookies not working

    - by TrymBeast
    I've been trying to log in to pyload through the web API, but wget is not saving the cookies and I don't understand why. I'm using the following command:

        wget --delete-after --keep-session-cookies --save-cookies=my_cookies.txt --post-data="username=USERNAME&password=PASSWORD" http://localhost:8000/api/login

    But the content of my_cookies.txt is:

        # HTTP cookie file.
        # Generated by Wget on 2012-06-23 22:31:33.
        # Edit at your own risk.

    When I run the same command in debug mode I get the following output, which includes the Set-Cookie in the response header:

        DEBUG output created by Wget 1.10.2 (Red Hat modified) on linux-gnueabi.
        --22:31:11--  http://localhost:8000/api/login
        Resolving localhost... 127.0.0.1
        Caching localhost => 127.0.0.1
        Connecting to localhost|127.0.0.1|:8000... connected.
        Created socket 3.
        Releasing 0x000504d0 (new refcount 1).
        ---request begin---
        POST /api/login HTTP/1.0
        User-Agent: Wget/1.10.2 (Red Hat modified)
        Accept: */*
        Host: localhost:8000
        Connection: Keep-Alive
        Content-Type: application/x-www-form-urlencoded
        Content-Length: 32
        ---request end---
        [POST data: username=USERNAME&password=PASSWORD]
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 200 OK
        Content-Length: 34
        Content-Type: application/json
        Cache-Control: no-cache, must-revalidate
        Set-cookie: beaker.session.id=405390ddc809efed54820638c95d7997; expires=Tue, 19-Jan-2038 04:14:07 GMT; Path=/
        Connection: Keep-Alive
        Date: Sat, 23 Jun 2012 21:31:11 GMT
        Server: CherryPy/3.1.2 WSGI Server
        ---response end---
        200 OK
        hs->local_file is: login (not existing)
        Registered socket 3 for persistent reuse.
        TEXTHTML is on.
        Length: 34 [application/json]
        Saving to: `login'
        100%[=======================================>] 34          --.-K/s   in 0s
        22:31:11 (1.28 MB/s) - `login' saved [34/34]
        Removing file due to --delete-after in main():
        Removing login.
        Saving cookies to my_cookies.txt.
        Done saving cookies.

    Can anyone tell me what I am doing wrong? Thanks in advance!
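
    As a cross-check outside of wget (an addition, not from the original post), the same login can be replayed with Python's standard library to confirm whether the cookie the server sets can be captured and written to a file at all. The URL and credentials are the placeholders from the question:

        # Replay the pyload login with a cookie jar to verify the Set-Cookie
        # header can be captured outside of wget. URL/credentials are placeholders.
        import urllib.parse
        import urllib.request
        from http.cookiejar import MozillaCookieJar

        jar = MozillaCookieJar("my_cookies_check.txt")
        opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

        data = urllib.parse.urlencode({"username": "USERNAME", "password": "PASSWORD"}).encode()
        with opener.open("http://localhost:8000/api/login", data=data) as resp:
            print(resp.status, resp.read())

        # ignore_discard/ignore_expires keep even session-style cookies in the
        # saved file, mirroring what --keep-session-cookies does for wget.
        jar.save(ignore_discard=True, ignore_expires=True)
        for cookie in jar:
            print(cookie.name, "=", cookie.value)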

    Read the article

  • How To Backup Of MySQL Database Using PhpMyAdmin

    - by Jyoti
    It is very important to back up your MySQL database; you will probably realize it when it is too late. A lot of web applications use MySQL for storing their content - blogs and a lot of other things. When you have all your content as HTML files on your web server, it is very easy to keep them safe from crashes: you just keep a copy of them on your own PC and then upload them again after the web server is restored after the crash. All the content in the MySQL database must also be backed up. If you have spent a lot of time creating the content and it is only stored in the MySQL server, you will feel very bad if it gets lost forever. Backing it up once every month or so makes sure you never lose too much of your work in case of a server crash, and it will make you sleep better at night. It is easy and fast, so there is no reason for not doing it.

    Step 1: Log into phpMyAdmin on your server.
    Step 2: Select the database that you would like to back up from the drop-down menu called Database.
    Step 3: A new page will be loaded in phpMyAdmin showing the selected database. In order to proceed with the backup, click on the Export tab.
    Step 4: The options that you should select, apart from the default ones, are "Save as file", which will save the file locally to your computer in .sql format, and "Add DROP TABLE", which will add the drop table functionality if the table already exists in the database backup.
    Step 5: Click on the Go button to start the export/backup procedure for your database. A download window will pop up prompting for the exact place where you would like to save the file on your local computer. It is possible that the download starts automatically; this depends on your browser's settings.

    Read the article

  • Rewriting html links with modproxyperlhtml

    - by Juancho
    I'm trying to set up an Apache reverse proxy using mod_proxy and ModProxyPerlHtml. This is my scenario:

        Domain for the proxy: http : // www.myserver.com/
        Destination server (the one behind the proxy): http : // myserver.foo.com/myapp/

    I'm sorry that I have to space the URLs, but Server Fault doesn't allow me to post more than two links as a "spam protection mechanism" (ridiculous on a site where you ask questions about servers and it's really probable to post the same URLs more than two times to explain your question). The idea is to map http : // www.myserver.com/ to http : // myserver.foo.com/myapp/. Note that the path on the proxy is / and on the destination server is /myapp/. All of the examples I can find on the net (like the one in the official documentation of ModProxyPerlHtml) are the other way around, i.e. path /myapp/ on the proxy and path / on the destination server. This is my current config, which doesn't work:

        ProxyPass / http : // myserver.foo.com/myapp/
        ProxyPassReverse / http : // myserver.foo.com/myapp/
        PerlInputFilterHandler Apache2::ModProxyPerlHtml
        PerlOutputFilterHandler Apache2::ModProxyPerlHtml
        SetHandler perl-script
        PerlSetVar ProxyHTMLVerbose "On"
        LogLevel Info

        <Location / >
            # ProxyPassReverse /myapp/
            PerlAddVar ProxyHTMLURLMap "/myapp/ /"
            PerlAddVar ProxyHTMLURLMap "http : // myserver.foo.com /"
        </Location>

    The examples use ProxyPassReverse inside the Location directive, but in my case that only works when it is outside. With this configuration the links aren't being replaced as they should be; my guess is that the Location isn't being matched, and thus the rewrite rules aren't being applied. The error log only shows that it uncompresses the content and searches it, but doesn't find anything:

        [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Uncompressing text/html; charset=UTF-8, Content-Encoding: gzip\n
        [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\\/javascript|text\\/html|text\\/css|text\\/xml|application\\/.*javascript|application\\/.*xml)/is\n
        [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Compressing output as Content-Encoding: gzip\n
        [Tue Nov 13 08:42:06 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\\/javascript|text\\/html|text\\/css|text\\/xml|application\\/.*javascript|application\\/.*xml)/is\n

    What could be wrong?

    Read the article

  • Using wget to download pdf files from a site that requires cookies to be set

    - by matt74tm
    I want to access a newspaper site and then download their epaper copies (in PDF). The site requires me to log in using my email address and password, and then it permits me to access those PDF URLs. I'm having trouble 'setting my session' in wget. When I log in to the site from my browser, it sets two cookie values:

        [email protected]
        Password=12345

    I tried:

        wget --post-data "[email protected]&Password=12345" http://epaper.abc.com/login.aspx

    However, that just downloaded the login page and saved it locally. The FORM on the login page has two fields, txtUserID and txtPassword, and radio buttons like this:

        <input id="rbtnManchester" type="radio" checked="checked" name="txtpub" value="44">

    Another button:

        <input id="rbtnLondon" type="radio" name="txtpub" value="64">

    If I post this to the login.aspx page, I get the same output:

        wget --post-data "[email protected]&txtPassword=12345&txtpub=44" http://epaper.abc.com/login.aspx

    If I add --save-cookies abc_cookies.txt, it doesn't seem to have anything other than the default content. For the last one, if I add --debug as well, it says:

        ...
        Set-Cookie: ASP.NET_SessionId=05kphcn4hjmblq45qgnjoe41; path=/; HttpOnly
        ...
        Stored cookie epaper.abc.com -1 (ANY) / <session> <insecure> [expiry none] ASP.NET_SessionId 05kphcn4hjmblq45qgnjoe41
        Length: 107253 (105K) [text/html]
        Saving to: `login.aspx'
        ...
        Saving cookies to abc_cookies.txt.

    However, abc_cookies.txt shows ONLY the following:

        # HTTP cookie file.
        # Generated by Wget on 2011-08-16 08:03:05.
        # Edit at your own risk.

    (Not sure why I'm not getting any responses on SO - perhaps SU is a better forum - http://stackoverflow.com/questions/7064171/using-wget-to-download-pdf-files-from-a-site-that-requires-cookies-to-be-set)

    EDIT 1

        C:\Temp>wget --cookies=on --keep-session-cookies --save-cookies abc_cookies.txt --post-data "txtUserID=abc%40gmail.com&txtPassword=password&txtpub=44&chkbox=checkbox&submit.x=48&submit.y=7" http://epaper.abc.com/login.aspx --debug
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc
        DEBUG output created by Wget 1.11.4 on Windows-MinGW.
        --2011-08-18 08:15:59--  http://epaper.abc.com/login.aspx
        Resolving epaper.abc.com... seconds 0.00, 999.999.99.99
        Caching epaper.abc.com => 999.999.99.99
        Connecting to epaper.abc.com|999.999.99.99|:80... seconds 0.00, connected.
        Created socket 300.
        Releasing 0x00a2ae80 (new refcount 1).
        ---request begin---
        POST /login.aspx HTTP/1.0
        User-Agent: Wget/1.11.4
        Accept: */*
        Host: epaper.abc.com
        Connection: Keep-Alive
        Content-Type: application/x-www-form-urlencoded
        Content-Length: 100
        ---request end---
        [POST data: txtUserID=abc%40gmail.com&txtPassword=password&txtpub=44&chkbox=checkbox&submit.x=48&submit.y=7]
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 200 OK
        Connection: keep-alive
        Date: Thu, 18 Aug 2011 02:46:17 GMT
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        X-AspNet-Version: 2.0.50727
        Set-Cookie: ASP.NET_SessionId=owcrje55yl45kgmhn43gq145; path=/; HttpOnly
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Content-Length: 107253
        ---response end---
        200 OK
        Registered socket 300 for persistent reuse.
        Stored cookie epaper.abc.com -1 (ANY) / <session> <insecure> [expiry none] ASP.NET_SessionId owcrje55yl45kgmhn43gq145
        Length: 107253 (105K) [text/html]
        Saving to: `login.aspx.1'
        100%[=========================================>] 107,253     24.9K/s   in 4.2s
        2011-08-18 08:16:05 (24.9 KB/s) - `login.aspx.1' saved [107253/107253]
        Saving cookies to abc_cookies.txt.
        Done saving cookies.

        C:\Temp>wget --referer=http://epaper.abc.com/login.aspx --cookies=on --load-cookies abc_cookies.txt --keep-session-cookies --save-cookies abc_cookies.txt http://epaper.abc.com/PagePrint/16_08_2011_001.pdf --debug
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc
        DEBUG output created by Wget 1.11.4 on Windows-MinGW.
        Stored cookie epaper.abc.com -1 (ANY) / <session> <insecure> [expiry none] ASP.NET_SessionId owcrje55yl45kgmhn43gq145
        --2011-08-18 08:16:12--  http://epaper.abc.com/PagePrint/16_08_2011_001.pdf
        Resolving epaper.abc.com... seconds 0.00, 999.999.99.99
        Caching epaper.abc.com => 999.999.99.99
        Connecting to epaper.abc.com|999.999.99.99|:80... seconds 0.00, connected.
        Created socket 300.
        Releasing 0x00598290 (new refcount 1).
        ---request begin---
        GET /PagePrint/16_08_2011_001.pdf HTTP/1.0
        Referer: http://epaper.abc.com/login.aspx
        User-Agent: Wget/1.11.4
        Accept: */*
        Host: epaper.abc.com
        Connection: Keep-Alive
        Cookie: ASP.NET_SessionId=owcrje55yl45kgmhn43gq145
        ---request end---
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 200 OK
        Connection: keep-alive
        Date: Thu, 18 Aug 2011 02:46:30 GMT
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        X-AspNet-Version: 2.0.50727
        content-disposition: attachement; filename=Default_logo.gif
        Cache-Control: private
        Content-Type: image/GIF
        Content-Length: 4568
        ---response end---
        200 OK
        Registered socket 300 for persistent reuse.
        Length: 4568 (4.5K) [image/GIF]
        Saving to: `16_08_2011_001.pdf'
        100%[=========================================>] 4,568       7.74K/s   in 0.6s
        2011-08-18 08:16:14 (7.74 KB/s) - `16_08_2011_001.pdf' saved [4568/4568]
        Saving cookies to abc_cookies.txt.
        Done saving cookies.

    Contents of abc_cookies.txt:

        epaper.abc.com  FALSE   /       FALSE   0       ASP.NET_SessionId       owcrje55yl45kgmhn43gq145

    Read the article

  • Redirecting to a dynamic page

    - by binarydev
    I have a page displaying blog posts (latest_posts.php) and another page that displays single blog posts (blog.php). I intend to link the image title in latest_posts.php so that it redirects to blog.php where it would display the particular post that was clicked.

    latest_posts.php:

        <!-- Header -->
        <h2 class="underline">
          <span>What&#039;s new</span>
          <span></span>
        </h2>
        <!-- /Header -->
        <!-- Posts list -->
        <ul class="post-list post-list-1">
        <?php
        /* Fetches Date/Time, Post Content and title */
        include 'dbconnect.php';
        $sql = "SELECT * FROM wp_posts";
        $res = mysql_query($sql);
        while ( $row = mysql_fetch_array($res) ) {
        ?>
          <!-- Post #1 -->
          <li class="clear-fix">
            <!-- Date -->
            <div class="post-list-date">
              <div class="post-date-box">
              <?php
              // Timestamp broken down to show accordingly
              $timestamp = $row['post_date'];
              $datetime = new DateTime($timestamp);
              $date = $datetime->format("d");
              $month = $datetime->format("M");
              ?>
                <h3><?php echo $date; ?></h3>
                <span><?php echo $month; ?></span>
              </div>
            </div>
            <!-- /Date -->
            <!-- Image + comments count -->
            <div class="post-list-image">
              <!-- Image -->
              <div class="image image-overlay-url image-fancybox-url">
                <a href="post.php" class="preloader-image">
                <?php echo '<img src="', $row['image'], '" alt="' , $row['post_title'] , '\'s Blog Image" />'; ?>
                </a>
              </div>
              <!-- /Image -->
            </div>
            <!-- /Image + comments count -->
            <!-- Content -->
            <div class="post-list-content">
              <div>
                <!-- Header -->
                <h4>
                  <a href="post.php? . $row['ID'] . ">
                  <?php echo $row['post_title']; ?>
                  </a>
                </h4>
                <!-- /Header -->
                <!-- Excerpt -->
                <p><?php echo $row ['post_content']; } ?></p>
                <!-- /Excerpt -->
              </div>
            </div>
            <!-- /Content -->
          </li>
          <!-- /Post #1 -->
        </ul>
        <!-- /Posts list -->
        <a href="blog.php" class="button-browse">Browse All Posts</a>
        </div>
        <?php require_once('include/twitter_user_timeline.php'); ?>

    blog.php:

        <?php require_once('include/header.php'); ?>
        <body class="blog">
        <?php require_once('include/navigation_bar_blog.php'); ?>
        <div class="blog">
          <div class="main">
            <!-- Header -->
            <h2 class="underline">
              <span>What&#039;s new</span>
              <span></span>
            </h2>
            <!-- /Header -->
            <!-- Layout 66x33 -->
            <div class="layout-p-66x33 clear-fix">
              <!-- Left column -->
              <!-- <div class="column-left"> -->
              <!-- Posts list -->
              <ul class="post-list post-list-2">
              <?php
              /* Fetches Date/Time, Post Content and title with Pagination */
              include 'dbconnect.php';
              // sets to default page
              if(empty($_GET['pn'])){
                  $page = 1;
              } else {
                  $page = $_GET['pn'];
              }
              // Index of the page
              $index = ($page-1)*3;
              $sql = "SELECT * FROM `wp_posts` ORDER BY `post_date` DESC LIMIT " . $index . " ,3";
              $res = mysql_query($sql);
              // Loops through the values
              while ( $row = mysql_fetch_array($res) ) {
              ?>
                <!-- Post #1 -->
                <li class="clear-fix">
                  <!-- Date -->
                  <div class="post-list-date">
                    <div class="post-date-box">
                    <?php
                    // Timestamp broken down to show accordingly
                    $timestamp = $row['post_date'];
                    $datetime = new DateTime($timestamp);
                    $date = $datetime->format("d");
                    $month = $datetime->format("M");
                    ?>
                      <h3><?php echo $date; ?></h3>
                      <span><?php echo $month; ?></span>
                    </div>
                  </div>
                  <!-- /Date -->
                  <!-- Image + comments count -->
                  <div class="post-list-image">
                    <!-- Image -->
                    <div class="image image-overlay-url image-fancybox-url">
                      <a href="post.php" class="preloader-image">
                      <?php echo '<img src="', $row['image'], '" alt="' , $row['post_title'] , '\'s Blog Image" />'; ?>
                      </a>
                    </div>
                    <!-- /Image -->
                  </div>
                  <!-- /Image + comments count -->
                  <!-- Content -->
                  <div class="post-list-content">
                    <div>
                      <?php
                      $id = $_GET['ID'];
                      $post = lookup_post_somehow($id);
                      if($post) {
                          // render post
                      } else {
                          echo 'blog post not found..';
                      }
                      ?>
                      <!-- Header -->
                      <h4>
                        <a href="post.php"><?php echo $row['post_title']; ?></a>
                      </h4>
                      <!-- /Header -->
                      <!-- Excerpt -->
                      <p><?php echo $row ['post_content']; ?></p>
                      <!-- /Excerpt -->
                    </div>
                  </div>
                  <!-- /Content -->
                </li>
                <!-- /Post #1 -->
              <?php } // close while loop ?>
              </ul>
              <!-- /Posts list -->
              <div><!-- Pagination -->
                <ul class="blog-pagination clear-fix">
                <?php
                // Count the number of rows
                $numberofrows = mysql_query("SELECT COUNT(ID) FROM `wp_posts`");
                // Do ceil() to round the result according to number of posts
                $postsperpage = 4;
                $numOfPages = ceil($numberofrows / $postsperpage);
                for($i=1; $i < $numOfPages; $i++) {
                    // echos links for each page
                    $paginationDisplay = '<li><a href="blog.php?pn=' . $i . '">' . $i . '</a></li>';
                    echo $paginationDisplay;
                }
                ?>
                <!--
                <li><a href="#" class="selected">1</a></li>
                <li><a href="#">2</a></li>
                <li><a href="#">3</a></li>
                <li><a href="#">4</a></li>
                -->
                </ul>
              </div><!-- /Pagination -->
              <!-- /div> -->
              <!-- Left column -->
            </div>
            <!-- /Layout 66x33 -->
          </div>
        </div>
        <?php require_once('include/twitter_user_timeline.php'); ?>
        <?php require_once('include/footer_blog.php'); ?>

    How do I render?

    Read the article

  • Design guideline for saving big byte stream in c# [migrated]

    - by Praveen
    I have an application where I am receiving a big byte array very fast, around every 50 milliseconds. The byte array contains some information like the file name etc. The data (byte array) may come from several sources. Each time I receive the data, I have to find the file name and save the data to that file name. I need some guidelines on how I should design it so that it works efficiently. Following is my code...

        public class DataSaver
        {
            private static Dictionary<string, FileStream> _dictFileStream;

            public static void SaveData(byte[] byteArray)
            {
                string fileName = GetFileNameFromArray(byteArray);
                FileStream fs = GetFileStream(fileName);
                fs.Write(byteArray, 0, byteArray.Length);
            }

            private static FileStream GetFileStream(string fileName)
            {
                FileStream fs;
                bool hasStream = _dictFileStream.TryGetValue(fileName, out fs);
                if (!hasStream)
                {
                    fs = new FileStream(fileName, FileMode.Append);
                    _dictFileStream.Add(fileName, fs);
                }
                return fs;
            }

            public static void CloseSaver()
            {
                foreach (var key in _dictFileStream.Keys)
                {
                    _dictFileStream[key].Close();
                }
            }
        }

    How can I improve this code? I need to create a thread, maybe, to do the saving.

    Read the article

  • Restrictive routing best practices for Google App Engine with python?

    - by Aleksandr Makov
    Say I have a simple structure:

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/profile', 'pages.profile'),
            (r'/dashboard', 'pages.dash'),
        ], debug=True)

    Basically all pages require authentication except for the login. If a visitor tries to reach a restricted page and he isn't authorized (or lacks privileges), then he gets redirected to the login view. The question is about the routing design. Should I check the auth and ACL privileges in each of the modules (pages.profile and pages.dash from the example above), or just pass all requests through a single routing mechanism:

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/.+', 'router')
        ], debug=True)

    I'm still quite new to GAE, but my app requires authentication as well as ACL. I'm aware that there's a login directive on the server config level, but I don't know how it works or how I can tie it in with my ACL logic, and what's worse, I cannot estimate the time needed to get it running. Besides, it looks to provide only 2 user groups: admin and user. In any case, this is the configuration I use:

        handlers:
        - url: /favicon.ico
          static_files: static/favicon.ico
          upload: static/favicon.ico
        - url: /static/*
          static_dir: static
        - url: .*
          script: main.app
          secure: always

    Or am I missing something here and ACL can be set in the config file? Thanks.
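
    For the per-handler option, one common pattern (a sketch under assumptions, not an answer from the original thread) is to centralise the check in a decorator so each restricted handler stays a one-liner; get_current_user_id() below is a hypothetical placeholder for whatever session/ACL lookup the app actually uses:

        # Hypothetical auth decorator for webapp2 handlers; the session lookup
        # (get_current_user_id) is a placeholder, not a real webapp2 API.
        import webapp2

        def get_current_user_id(request):
            # Placeholder: read a session cookie/token and return a user id or None.
            return request.cookies.get('uid')

        def login_required(handler_method):
            def wrapper(self, *args, **kwargs):
                if get_current_user_id(self.request) is None:
                    return self.redirect('/')        # back to the login view
                return handler_method(self, *args, **kwargs)
            return wrapper

        class Dash(webapp2.RequestHandler):
            @login_required
            def get(self):
                self.response.write('dashboard')

        app = webapp2.WSGIApplication([(r'/dashboard', Dash)], debug=True)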

    Read the article

  • QA & Testing with UPK

    - by dan.gallo(at)oracle.com
    Most customers know that UPK produces both the Word and Excel based test scripts for UAT. Did you know that you can use UPK for QA review and bug tracking?

    To use UPK for QA, create content and assign it appropriately to authorized reviewers. Then have them open the Developer, use customized views to find content assigned to them quickly, and check out the topics. They can then use the Topic Editor to review the content and provide comments right in the bubbles or use explanation frames. QA-ing content this way is easier than publishing and sending out .tpcs or docs for people to review.

    How about UPK for bug tracking? The hardest part about fixing bugs in software is reproducing the error! When you use UPK for bug tracking, it captures the exact steps the user took that gave them the error. Now development can easily walk through the process in a simulated environment to see what might have caused it, they have a documented procedure for what generated the error, and they are able to better communicate with the LOB. Also, they can update or attach the simulation/documentation to any defect management software like Bugzilla or something similar - all thanks to UPK.

    Read the article

  • Adding operation in middle of complex sequence diagram in visio 2003

    - by James
    I am using Microsoft Visio 2003 to define static classes with operations/methods and sequence diagrams referring to these classes. The sequence diagram is almost done, but I realized that I missed one operation in the middle of the diagram. When I try to move the rest of the sequence down by selecting it as a block, all the operations in the block lose their link with the static diagrams. (Methods which were referred to the static classes as fun() became fun, which means that they now no longer refer to the static diagrams, and any future changes would not be reflected in the dynamic sequence diagrams automatically.) The sequence diagrams have grown to A3 paper size and I have many such diagrams which need correction. Manually moving the operations one by one would involve a lot of effort. Could someone kindly suggest a way to overcome this problem?

    Read the article
