Search Results

Search found 33965 results on 1359 pages for 'oracle upk content'.


  • Gmail suspects confirmation email of stealing personal information

    - by Dennis Gorelik
    When user registers on my web site, web site sends user email confirmation link. Subject: Please confirm your email address Body:Please open this link in your browser to confirm your email address: http://www.postjobfree.com/a/c301718062444f96ba0e358ea833c9b3 This link will expire on: 6/9/2012 8:04:07 PM EST. If my web site sends that email to GMaill (either @gmail.com or another domain that's handled by Google Apps) and that user never emailed to email -- then GMail not only puts the email to spam folder, but also adds prominent red warning:Be careful with this message. Similar messages were used to steal people's personal information. Unless you trust the sender, don't click links or reply with personal information. Learn more That warning really scares many of my users, so they are afraid to open that link and confirm their email. What can I do about it? Ideally I would like that message end up in user's inbox, not spam folder. But at least how do I prevent that scary message? IP address of my mailing server is not blacklisted: http://www.mxtoolbox.com/SuperTool.aspx?action=blacklist%3a208.43.198.72 I use SPF and DKIM signature. Below is the email that ended up in spam folder with that scary red message. Delivered-To: [email protected] Received: by 10.112.84.98 with SMTP id x2csp36568lby; Fri, 8 Jun 2012 17:04:15 -0700 (PDT) Received: by 10.60.25.6 with SMTP id y6mr9110318oef.42.1339200255375; Fri, 08 Jun 2012 17:04:15 -0700 (PDT) Return-Path: Received: from smtp.postjobfree.com (smtp.postjobfree.com. [208.43.198.72]) by mx.google.com with ESMTP id v8si6058193oev.44.2012.06.08.17.04.14; Fri, 08 Jun 2012 17:04:15 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates 208.43.198.72 as permitted sender) client-ip=208.43.198.72; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 208.43.198.72 as permitted sender) [email protected]; dkim=pass [email protected] DomainKey-Signature: a=rsa-sha1; c=nofws; q=dns; d=postjobfree.com; s=postjobfree.com; h= received:message-id:mime-version:from:to:date:subject:content-type; b=TCip/3hP1WWViWB1cdAzMFPjyi/aUKXQbuSTVpEO7qr8x3WdMFhJCqZciA69S0HB4 Koatk2cQQ3fOilr4ledCgZYemLSJgwa/ZRhObnqgPHAglkBy8/RAwkrwaE0GjLKup 0XI6G2wPlh+ReR+inkMwhCPHFInmvrh4evlBx/VlA= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=postjobfree.com; s=postjobfree.com; h=content-type:subject:date:to:from:mime-version:message-id; bh=N59EIgRECIlAnd41LY4HY/OFI+v1p7t5M9yP+3FsKXY=; b=J3/BdZmpjzP4I6GA4ntmi4REu5PpOcmyzEL+6i7y7LaTR8tuc2h7fdW4HaMPlB7za Lj4NJPed61ErumO66eG4urd1UfyaRDtszWeuIbcIUqzwYpnMZ8ytaj8DPcWPE3JYj oKhcYyiVbgiFjLujib3/2k2PqDIrNutRH9Ln7puz4= Received: from sv3035 (sv3035 [208.43.198.72]) by smtp.postjobfree.com with SMTP; Fri, 8 Jun 2012 20:04:07 -0400 Message-ID: MIME-Version: 1.0 From: "PostJobFree Notification" To: [email protected] Date: 8 Jun 2012 20:04:07 -0400 Subject: Please confirm your email address Content-Type: multipart/alternative; boundary=--boundary_107_ffa6a9ea-01dc-40f5-a50c-4c3b3d113f08 ----boundary_107_ffa6a9ea-01dc-40f5-a50c-4c3b3d113f08 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable Please open this link in your browser to confirm your email addre= ss: =0D=0Ahttp://www.postjobfree.com/a/c301718062444f96ba0e358ea8= 33c9b3 =0D=0AThis link will expire on: 6/9/2012 8:04:07 PM EST. 
=0D=0A ----boundary_107_ffa6a9ea-01dc-40f5-a50c-4c3b3d113f08 Content-Type: text/html; charset=utf-8 Content-Transfer-Encoding: base64 PGh0bWw+PGhlYWQ+PG1ldGEgaHR0cC1lcXVpdj1Db250ZW50LVR5cGUgY29udGVu dD0idGV4dC9odG1sOyBjaGFyc2V0PXV0Zi04Ij48L2hlYWQ+DQo8Ym9keT48ZGl2 Pg0KUGxlYXNlIG9wZW4gdGhpcyBsaW5rIGluIHlvdXIgYnJvd3NlciB0byBjb25m aXJtIHlvdXIgZW1haWwgYWRkcmVzczo8YnIgLz48YSBocmVmPSJodHRwOi8vd3d3 LnBvc3Rqb2JmcmVlLmNvbS9hL2MzMDE3MTgwNjI0NDRmOTZiYTBlMzU4ZWE4MzNj OWIzIj5odHRwOi8vd3d3LnBvc3Rqb2JmcmVlLmNvbS9hL2MzMDE3MTgwNjI0NDRm OTZiYTBlMzU4ZWE4MzNjOWIzPC9hPjxiciAvPlRoaXMgbGluayB3aWxsIGV4cGly ZSBvbjogNi85LzIwMTIgODowNDowNyBQTSBFU1QuPGJyIC8+DQo8L2Rpdj48L2Jv ZHk+PC9odG1sPg== ----boundary_107_ffa6a9ea-01dc-40f5-a50c-4c3b3d113f08--
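    One quick check when chasing this kind of warning is to confirm that the SPF and DKIM records Gmail resolves are really the ones you think you published. The sketch below is not from the original post (the class name is made up; the record names come from the headers above, where the DKIM-Signature uses selector "postjobfree.com") and uses the JDK's built-in JNDI DNS provider to dump the relevant TXT records:

    import java.util.Hashtable;
    import javax.naming.directory.Attribute;
    import javax.naming.directory.Attributes;
    import javax.naming.directory.InitialDirContext;

    public class MailDnsCheck {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put("java.naming.factory.initial", "com.sun.jndi.dns.DnsContextFactory");
            InitialDirContext dns = new InitialDirContext(env);

            // SPF policy lives in the TXT record of the sending domain;
            // the DKIM public key lives in TXT of <selector>._domainkey.<domain>.
            String[] names = {
                "postjobfree.com",
                "postjobfree.com._domainkey.postjobfree.com"
            };
            for (String name : names) {
                System.out.println(name + ":");
                try {
                    Attributes attrs = dns.getAttributes(name, new String[] { "TXT" });
                    Attribute txt = attrs.get("TXT");
                    if (txt == null) {
                        System.out.println("  (no TXT record found)");
                        continue;
                    }
                    for (int i = 0; i < txt.size(); i++) {
                        System.out.println("  " + txt.get(i));
                    }
                } catch (Exception e) {
                    System.out.println("  (lookup failed: " + e + ")");
                }
            }
        }
    }

    Since the headers above already show spf=pass and dkim=pass, the published records look fine, which points at Gmail's content heuristics and sender reputation rather than authentication.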

    Read the article

  • Adding Descriptive Flex Field (DFF) through OAF Personalization

    - by Manoj Madhusoodanan
    In this blog I will explain how to add a DFF to an existing OAF page through personalization. I am using the Supplier Quick Update page ( /oracle/apps/pos/supplier/webui/SuppSummPG ). If you want to see how to create a DFF please click here. In this scenario I am using a custom DFF. Following are the details.

    Application -> Payables ( Code: SQLAP )
    Name -> XXCUST_SUPPLIER_DFF
    Title -> XXCUST - Supplier DFF
    Table Name -> AP_SUPPLIERS
    DFV View Name -> XXCUST_SUPPLIER_DFV
    Reference Fields -> ATTRIBUTE_CATEGORY

    Following are the Context Field details.

    Prompt -> Supplier Type
    Value Set -> XXCUST_SUP_TYPE ( Values : External and Internal )
    Reference Field -> ATTRIBUTE_CATEGORY

    The table below shows the segment details of XXCUST_SUPPLIER_DFF.

    Code                   Segments                Column      Value Set
    Global Data Elements   Identification Number   ATTRIBUTE1  15 Characters
    External               Type                    ATTRIBUTE2  XXCUST_EXT_SUP_TYPE ( Values : Domestic, International )
    Internal               Department              ATTRIBUTE2  15 Characters

    Perform the following steps to create the flex item in the Quick Update page.

    1) Click on Personalize Page. In the Personalize Page click on Complete View.
    2) Click on Create Item. ( Based on where you want to place the DFF, choose the appropriate layout. )
    3) Create the flex item with the following details.
    4) If you want to rearrange the item on the page, click on Reorder.

    Following is the output.

    Read the article

  • Error trapping for a missing data source in a Spring MVC / Spring JDBC web app [migrated]

    - by Geeb
    I have written a web app that uses the Spring MVC libraries and Spring JDBC to connect to an Oracle DB. (I don't use any ORM-type libraries as I create stored procedures on Oracle that do my stuff, and I'm quite happy with that.) I use a connection pool to Oracle managed by the Tomcat container. The app generally works absolutely fine, by the way! BUT... I noticed the other day, when I tried to set up the app on another Tomcat instance, that I had forgotten to configure the connection pool, and obviously the app could not get hold of an org.apache.commons.dbcp.BasicDataSource object, so it crashed. I define the pool params in the Tomcat "context.conf".

    In my "web.xml" I have:

    <servlet>
      <servlet-name>appServlet</servlet-name>
      <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
      <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/Spring/appServlet/servlet-context.xml</param-value>
      </init-param>
      <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
      <servlet-name>appServlet</servlet-name>
      <!-- Map *everything* to appServlet -->
      <url-pattern>/</url-pattern>
    </servlet-mapping>
    <resource-ref>
      <description>Oracle Datasource example</description>
      <res-ref-name>jdbc/ora1</res-ref-name>
      <res-type>org.apache.commons.dbcp.BasicDataSource</res-type>
      <res-auth>Container</res-auth>
    </resource-ref>

    And I have a Spring "servlet-context.xml" where JNDI is used to map the data source object provided by the connection pool to a Spring bean with the ID of "dataSource":

    <jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/ora1" resource-ref="true" />

    Here's the question: where do I trap the case where the database cannot be accessed, for whatever reason? I don't want the user to see a yard-and-a-half of Java stack trace in their browser, but rather a nicer message that tells them there is a database problem etc. It seems that my app tries to configure the "dataSource" bean (in "servlet-context.xml") before any code has tested that it can actually provide a dataSource object from the pool?! Maybe I'm not fully understanding exactly what is going on in these stages of the app firing up... Thanks for any advice!
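    A minimal sketch of one place such failures can be trapped, assuming the Spring MVC setup described above (the class name and the "databaseUnavailable" view name are made up for illustration): a HandlerExceptionResolver registered as a bean in servlet-context.xml can catch the connection-acquisition exceptions thrown at request time and forward to a friendly error view. A JNDI lookup that fails at context startup happens before any resolver runs, so deferring that lookup (the jee:jndi-lookup element has attributes for lazy lookup and a default object) is a separate concern.

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.springframework.dao.DataAccessResourceFailureException;
    import org.springframework.jdbc.CannotGetJdbcConnectionException;
    import org.springframework.web.servlet.HandlerExceptionResolver;
    import org.springframework.web.servlet.ModelAndView;

    public class DatabaseDownExceptionResolver implements HandlerExceptionResolver {

        @Override
        public ModelAndView resolveException(HttpServletRequest request,
                                             HttpServletResponse response,
                                             Object handler,
                                             Exception ex) {
            // Trap only the "cannot obtain a connection" family here and show a
            // friendly page; returning null lets every other exception fall
            // through to the normal handling.
            if (ex instanceof CannotGetJdbcConnectionException
                    || ex instanceof DataAccessResourceFailureException) {
                return new ModelAndView("databaseUnavailable");
            }
            return null;
        }
    }

    Registering it is then a one-line bean definition in servlet-context.xml, and the "databaseUnavailable" view maps to whatever error page you prefer.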

    Read the article

  • Nginx location regex is not matching

    - by shtuff.it
    The following has been working to cache css and js for me:

    location ~ "^(.*)\.(min.)?(css|js)$" {
        expires max;
    }

    Results:

    $ curl -I http://mysite.com/test.css
    HTTP/1.1 200 OK
    Server: nginx
    Date: Thu, 16 Jan 2014 18:55:28 GMT
    Content-Type: text/css
    Content-Length: 19578
    Last-Modified: Mon, 13 Jan 2014 18:54:53 GMT
    Connection: keep-alive
    Expires: Thu, 31 Dec 2037 23:55:55 GMT
    Cache-Control: max-age=315360000
    X-Backend: stage01
    Accept-Ranges: bytes

    I am trying to set up versioning for my js / css using a 10-digit unix timestamp, and am having issues getting a match with the following regex, which looks valid.

    location ~ "^(.*)([\d]{10})\.(min\.)?(css|js)$" {
        expires max;
    }

    Results:

    $ curl -I http://mysite.com/test_1234567890.css
    HTTP/1.1 200 OK
    Server: nginx
    Date: Thu, 16 Jan 2014 19:05:03 GMT
    Content-Type: text/css
    Content-Length: 19578
    Last-Modified: Mon, 13 Jan 2014 18:54:53 GMT
    Connection: keep-alive
    X-Backend: stage01
    Accept-Ranges: bytes
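    As a quick sanity check of the expression itself, outside nginx, the pattern does match the versioned URI when tested in isolation. The hypothetical test below uses java.util.regex rather than nginx's PCRE, so it only demonstrates the pattern logic; if the expression is sound, the next place to look is usually which location block nginx actually selects for the request (an earlier regex location or a ^~ prefix location can win).

    import java.util.regex.Pattern;

    public class LocationRegexCheck {
        public static void main(String[] args) {
            // Same expression as the second nginx location block.
            Pattern versioned = Pattern.compile("^(.*)([\\d]{10})\\.(min\\.)?(css|js)$");

            // URIs to exercise: the versioned file, a minified variant, and a plain file.
            String[] uris = { "/test_1234567890.css", "/test_1234567890.min.js", "/test.css" };
            for (String uri : uris) {
                System.out.println(uri + " -> " + versioned.matcher(uri).matches());
            }
            // Expected: true, true, false. The expression itself accepts the
            // timestamped names, so a non-match in nginx points at location
            // selection rather than at the regex.
        }
    }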

    Read the article

  • Quickie Guide Getting Java Embedded Running on Raspberry Pi

    - by hinkmond
    Gary C. and I did a Bay Area Java User Group presentation on how to get Java Embedded running on a RPi. See: here. But, if you want the Quickie Guide on how to get Java up and running on the RPi, then follow these steps (which I'm doing right now as we speak, since I got my RPi in the mail on Monday. Woo-hoo!!!). So, follow along at home as I do the same steps here on my board...

    1. Download the Win32DiskImager if you are on Windows, or use dd on a Linux PC: https://launchpad.net/win32-image-writer/0.6/0.6/+download/win32diskimager-binary.zip
    2. Download the RPi Debian Wheezy image from here: http://files.velocix.com/c1410/images/debian/7/2012-08-08-wheezy-armel/2012-08-08-wheezy-armel.zip
    3. Insert a blank 4GB SD Card into your Windows or Linux PC.
    4. Use either Win32DiskImager or Linux dd to burn the unzipped image from #2 to the SD Card.
    5. Insert the SD Card into your RPi. Connect an Ethernet cable from your RPi to your network. Connect the RPi Power Adapter.
    6. The RPi will boot onto your network. Find its IP address using Windows Wireshark or Linux: sudo tcpdump -vv -ieth0 port 67 and port 68
    7. ssh to your RPi: ssh <ip_addr_rpi> -l pi <Password: "raspberry">
    8. Download Java SE Embedded: http://www.oracle.com/technetwork/java/embedded/downloads/javase/index.html NOTE: First click accept, then choose the first bundle in the list: ARMv6/7 Linux - Headless EABI, VFP, SoftFP ABI, Little Endian - ejre-7u6-fcs-b24-linux-arm-vfp-client_headless-10_aug_2012.tar.gz
    9. scp the bundle from #8 to your RPi: scp <ejre-bundle> pi@<ip_addr_rpi>
    10. mkdir /usr/local, untar the bundle from #9, and rename (move) the ejre1.7.0_06 directory to /usr/local/java

    That's it! You are ready to roll with Java Embedded on your RPi. Hinkmond

    Read the article

  • BIP Debugging to a file

    - by Tim Dexter
    If you use the standalone server, or BIP with OBIEE, and OC4J is the web server, have you ever taken a looksee at the console window (DOS/xterm) that you use to start it? Ever turned on debugging, seen masses of info flow by that window, and wanted to capture it all? I have been debugging today and watched all that info fly by, and on Windoze it gets lost before you can see it! The BIP developers use the System.out.println() and System.err.println() methods in the BIP applications to generate debugging information. Normally the output from these method calls goes to the console where the OC4J process is started. However, you can specify command line options when starting OC4J to direct the stdout and stderr output to files. The -out and -err parameters tell OC4J which files to direct the output to. All you need do is modify the oc4j.cmd file used to start BIP. I didn't get fancy and just plugged the following into the file under the start section. I modified the line:

    set CMDARGS=-config "%SERVER_XML%" -userThreads

    to

    set CMDARGS=-config "%SERVER_XML%" -out D:\BI\OracleBI\oc4j_bi\j2ee\home\log\oc4j.out -err D:\BI\OracleBI\oc4j_bi\j2ee\home\log\oc4j.err -userThreads

    Bounced the server and I now have a ballooning pair of debug files that I can pore over to my heart's content. The .out file appears to contain BIP-only log info and the .err file, OBIEE messages. If you are using another web server to host BIP, just check the user docs to find out how to get the log files written. Note to self: remember to turn off the debug when I'm done!

    Read the article

  • JNDI Datasource Problem on Tomcat 6, Hibernate

    - by Asuman AKYILDIZ
    I am using Tomcat 6 as the application server, Struts-Hibernate, and MyEclipse 6.0. My application uses the JDBC driver, but I need to modify it to use a JNDI datasource. I followed the steps described in the Tomcat 6.0 how-to tutorial. I defined my resource in the Tomcat conf:

    <Resource name="jdbc/ats" global="jdbc/ats" auth="Container" type="javax.sql.DataSource"
        driverClassName="oracle.jdbc.OracleDriver" url="jdbc:oracle:thin:@//localhost:1521/MISDEV"
        username="TEST" password="TEST" maxActive="20" maxIdle="10" maxWait="-1"
        validationQuery="SELECT 1 from dual" removeAbandoned="true" removeAbandonedTimeout="30"
        logAbandoned="false"/>

    I added the reference in my application's web.xml:

    <resource-ref>
      <description>Oracle Datasource example</description>
      <res-ref-name>jdbc/ats</res-ref-name>
      <res-type>javax.sql.DataSource</res-type>
      <res-auth>Container</res-auth>
    </resource-ref>

    And I defined the datasource and dialect in my hibernate.cfg.xml:

    <property name="connection.datasource">java:comp/env/jdbc/ats</property>
    <property name="dialect">org.hibernate.dialect.Oracle9Dialect</property>

    But when I create the Hibernate session, it cannot open the connection:

    09:18:11,322 ERROR JDBCExceptionReporter:72 - Connections could not be acquired from the underlying database!
    org.hibernate.exception.GenericJDBCException: Cannot open connection

    I also tried to set the properties at runtime:

    Configuration configuration = new Configuration();
    configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.Oracle9Dialect");
    //configuration.setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/ats");
    configuration.setProperty("hibernate.current_session_context_class", "thread");
    configuration.setProperty("hibernate.connection.provider_class", "org.hibernate.connection.C3P0ConnectionProvider");
    configuration.setProperty("hibernate.show_sql", "true");
    sessionFactory = configuration.configure().buildSessionFactory();

    It still does not open the connection. But when I use the JDBC driver it works:

    Configuration configuration = new Configuration();
    configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.Oracle9Dialect");
    //configuration.setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/ats");
    configuration.setProperty("hibernate.connection.url", "jdbc:oracle:thin:@//localhost:1521/MISDEV");
    configuration.setProperty("hibernate.connection.username", "test");
    configuration.setProperty("hibernate.connection.password", "test");
    configuration.setProperty("hibernate.connection.driver_class", "oracle.jdbc.OracleDriver");
    configuration.setProperty("hibernate.transaction.factory_class", "org.hibernate.transaction.JDBCTransactionFactory");
    configuration.setProperty("hibernate.current_session_context_class", "thread");
    configuration.setProperty("hibernate.connection.provider_class", "org.hibernate.connection.C3P0ConnectionProvider");
    configuration.setProperty("hibernate.show_sql", "true");
    sessionFactory = configuration.configure().buildSessionFactory();

    I have been searching for 3 days with no success. What may be the problem?
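    One thing worth checking, sketched below under the assumption that the Tomcat-managed pool is the intended connection source: when hibernate.connection.datasource is set, Hibernate picks its datasource-backed connection provider automatically, but explicitly setting hibernate.connection.provider_class to the C3P0 provider (as in the runtime snippets above) makes Hibernate try to open its own C3P0 pool from hibernate.connection.url/username/password, which are not set in the JNDI variant. A hypothetical HibernateUtil that leans on the container-managed datasource only:

    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public final class HibernateUtil {

        private static final SessionFactory SESSION_FACTORY = buildSessionFactory();

        private static SessionFactory buildSessionFactory() {
            Configuration configuration = new Configuration();
            // Point Hibernate at the container-managed JNDI datasource and let it
            // choose the matching connection provider; no provider_class, no C3P0,
            // and no url/username/password, since the pool in the Tomcat conf owns those.
            configuration.setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/ats");
            configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.Oracle9Dialect");
            configuration.setProperty("hibernate.current_session_context_class", "thread");
            configuration.setProperty("hibernate.show_sql", "true");
            return configuration.configure().buildSessionFactory();
        }

        public static SessionFactory getSessionFactory() {
            return SESSION_FACTORY;
        }
    }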

    Read the article

  • Recruitment Drive - Things Don't Always Go As Planned - Stay Flexible by Kalyan Neelagiri

    - by david.talamelli
    I am one of the Recruiters for Oracle and work in our India Recruitment Team. When we are hiring for multiple positions we often hold Recruitment Events to interview a large number of people as effectively as possible. These Events are often held on the weekend as many people are not free to attend an all day event during the working week. Just recently during a recruitment campaign we were running I was tasked to set up a Recruitment Event for some roles we were hiring for. I have set up and run weekend recruitment events in the past which have all run smoothly. However, this time arranging this recruitment event was quite a challenge for me. The planned event was taking place on a Saturday. I had almost sent out the confirmed scheduled list of candidates to the respective hiring team on Friday and was on track for the event to take place, but unfortunately there was breaking news in the media that there was a strike called in the city because of some political agitations and protests taking place on the event day. The hiring manager had rushed to me asking for my thoughts and ideas. I was in two minds on what to do. One on hand I was not ready to cancel the event because of all the work that so many people had put into getting this prepared and also I did not want to reschedule the event at the last minute if I did not need to. On the other hand I understood it may be best to reschedule the event as people may not be able to attend based on the political protests taking place on the day. In the end I decided to gather and check for other options because this might cause confusion and a problem for the scheduled candidates to drive in to the venue. So we had concluded to reschedule our event plans and moved the event to the next week. The good news is that we successfully executed this recruitment drive the following Saturday. We were glad that 100% of the candidates we able to make it to the new interview date and despite all the agitations in the city we were successful in hiring people for all the roles we had open. Things do not always go as planned. The best laid plans can sometimes be for nought based on external factors outside of our control. What this experience has taught me is that rather than focus on the negatives when you are thrown a curveball the best approach is to stay flexible and focus on finding ways to reach your outcome. Your plans may need to change but you can still achieve the results you are after if you have the right mind set.

    Read the article

  • Consuming Hello World pagelet in WebCenter Spaces

    - by astemkov
    Introduction

    The goal of this exercise is to show you how you can use the Hello World pagelet that you just created from your web space.

    Assumptions

    Let's assume the following: Pagelet Producer is running on http://pageletserver.company.com:8889/pagelets/, WebCenter is running on http://webcenter.company.com:8888/webcenter/, and you created the Hello_World pagelet as described here. For our exercise we will need a space, so let's log into WebCenter Portal and create a space called "myspace" using the "Portal Site" template.

    Registering Pagelet Producer with WebCenter Portal

    In order to use our newly created pagelet from WebCenter Spaces, we first need to register the Pagelet Producer:

    1. Click the "Administration" link on the WebCenter toolbar.
    2. Open the "Configuration" tab.
    3. Click the "Services" link in the upper-left corner of the page.
    4. Click the "Portlet Producers" link on the right-hand pane of the screen.
    5. Click the "Register" button.
    6. Select the "Pagelet Producer" radio button and type Producer Name = "MyPageletProducer", Server URL = http://pageletserver.company.com:8889/pagelets/
    7. Click the "Test" button. If everything is successful you will see the following screen. Now click "OK". The Pagelet Producer is registered.

    Inserting the Hello World pagelet into a WebCenter Space

    Now let's insert the Hello World pagelet into the "myspace" page:

    1. Go back to "myspace", click on the icon in the upper-right corner of the page and select "Edit Page".
    2. Click on one of the "Add Content" buttons.
    3. Select "Mash-Ups", then select "Pagelet Producers". You will see the MyPageletProducer that we just registered.
    4. Click on it. You will see the library "MyLib" that contains our "Hello_World" pagelet. Click on "MyLib" and you will see the "Hello_World" pagelet.
    5. Click the "Add" button, and then "Close".
    6. Click "Save", and then "Close".

    Now we see that our "Hello World" pagelet is inserted into the "myspace" page:

    Read the article

  • IIS7 returns 403.1 (execute access denied) for image file

    - by Kristoffer
    I have a web app running in IIS7 on Windows Server 2008. There is a virtual directory pointing to a shared folder "/Content/Data" on another machine (running Windows Server 2003), as well as a real directory "/Content/Images" on the local machine (web app sub folder). Accessing images in "/Content/Images" is no problem, but when an image (e.g. a JPEG file) in the "/Content/Data" is accessed by a browser, IIS returns this error: HTTP Error 403.1 - Forbidden: Execute access is denied. However, the web app can read and write to / from it. I assume IIS and ASP.NET are running under different user accounts? Does anyone have an idea on what I have to do to make it work? I have set the permissions on the shared folder to Everyone Full Control, with no luck.

    Read the article

  • Debian's Wordpress with broken plugin path?

    - by Vinícius Ferrão
    I've installed an Wordpress from Debian Wheezy package system and the plugins folder appears to be broken. As stated in the error log files of Apache2: [error] File does not exist: /var/lib/wordpress/wp-content/plugins/var The plugins are looking for an URL based on the full path, and not on the relative path. I can "temporary fix" the problem making a symbolic link to /var on the plugins folder, but I know that this is wrong and dirty. I don't know where to start debugging this. So any help is welcome. Additional information: /etc/wordpress/htaccess # Multisites generated htaccess RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] # add a trailing slash to /wp-admin RewriteRule ^([_0-9a-zA-Z-]+/)?wp-admin$ $1wp-admin/ [R=301,L] RewriteCond %{REQUEST_FILENAME} -f [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^ - [L] RewriteRule ^([_0-9a-zA-Z-]+/)?(wp-(content|admin|includes).*) $2 [L] RewriteRule ^([_0-9a-zA-Z-]+/)?(.*\.php)$ $2 [L] RewriteRule . index.php [L] Apache2 Configuration File: <VirtualHost *:80> Alias /wp-content /var/lib/wordpress/wp-content DocumentRoot /usr/share/wordpress ServerAdmin [email protected] <Directory /usr/share/wordpress> Options FollowSymLinks AllowOverride Limit Options FileInfo DirectoryIndex index.php Order allow,deny Allow from all </Directory> <Directory /var/lib/wordpress/wp-content> Options FollowSymLinks Order allow,deny Allow from all </Directory> </VirtualHost> Thanks in advance,

    Read the article

  • 2D Barcode Addendum

    - by Tim Dexter
    Having finally got my external drive back (long story) today from Oklahoma (thank you so much Sammy), I'm back with a full complement of Oracle and blogging tools at my disposal. I have missed JDeveloper this past week, which, I have found, I immensely prefer over Eclipse (let the flaming commence :0). I use Zoundry Raven for writing articles and it's not installed locally but on my external drive, so I have been soldiering on with the blog server's pain-in-the-backside UI for writing. Now that I have my favorite editor back and things are calming down work-wise, I will start to get the Excel template posts out.

    Today though, a note about 2D barcode support, or more specifically any barcode that needs some data manipulation before the barcode font is applied. I wrote about these fonts a long time back and laid out the Java class you would need to write if you had an algorithm from the font manufacturer to use. I missed out a valuable point and James at Luminex fell into the trap. He wanted to use the datamatrix font from IDAutomation and had built the Java class to be called from the RTF template, but it was not encoding, or at least did not appear to be.

    New debugging feature to the rescue. Kan over at the bipconsultng blog documented the feature a while back. Just adding <?xdo-debug-level:'STATEMENT'?> to my test template generated all the debug files in my c:\temp directory. No messing with files, just a simple command ... at last! Kan has documented the feature here. With the log in hand I spotted a Java error stack referencing a missing code128a method. Huh? Looking at James' class he had the following snippet:

    ENCODERS.put("code128a", mUtility.getClass().getMethod("code128a", clazz));
    ENCODERS.put("code128b", mUtility.getClass().getMethod("code128b", clazz));
    ENCODERS.put("code128c", mUtility.getClass().getMethod("code128c", clazz));
    ENCODERS.put("pdf417", mUtility.getClass().getMethod("pdf417", clazz));
    ENCODERS.put("datamatrix", mUtility.getClass().getMethod("datamatrix", clazz));

    His class did not include the other code128 and pdf417 methods, and BIP was expecting them. An easy fix: just comment them out, rebuild, and deploy, and the encoding started working. If you are hitting similar problems, check that class and ensure all of the referenced methods are available; if not, delete or get commenting. James now has purdy labels popping out that his hardware can read. Sweet!

    Read the article

  • Apache mod_rewrite and mod_vhost_alias Virtual Hosts and %1

    - by Matt Wall
    I have put the main bits of my httpd.conf down below. I am using %1 to get the host field so I can dynamically add vhosts by just creating dns/folders. One problem is I need to reference this: HttpStreamingLiveEventPath "D:/FMSApps/%1" HttpStreamingContentPath "D:/FMSApps/%1" In Apache when I try say to do this: http://test.domain.com/hds-vod/myfile.mp4.f4m it sees the %1 in the logs, and fails. Apache gives me this: [error] mod_jithttp [403]: No access to D:/Content/%1/DefaultContent/eve.mp4 What I'm looking for is the D:/Content/%1/DefaultContent/eve.mp4 to become D:/Content/test/DefaultContent/eve.mp4 Anyone have any useful resources / hints etc. to help me? Meanwhile my Google searching continues...! Listen 80 ServerName main1.rtmphost.com AccessFileName .htaccess ServerSignature On UseCanonicalName Off HostnameLookups Off Timeout 120 KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 15 RewriteLogLevel 0 RewriteLog logs/rewrite.log DocumentRoot D:/Content LoadModule vhost_alias_module modules/mod_vhost_alias.so VirtualDocumentRoot "D:/Content/%1" RewriteEngine On <Directory /> Options None AllowOverride None Order allow,deny Allow from all Satisfy all </Directory> <IfModule f4fhttp_module> <Location /vod> HttpStreamingEnabled true HttpStreamingContentPath "D:/FMSApps/%1" Options FollowSymLinks </Location> Redirect 301 /live/events/livepkgr/events /hds-live/livepkgr <Location /hds-live> HttpStreamingEnabled true HttpStreamingLiveEventPath "D:/FMSApps/%1" HttpStreamingContentPath "D:/FMSApps/%1" HttpStreamingF4MMaxAge 2 HttpStreamingBootstrapMaxAge 2 HttpStreamingFragMaxAge -1 Options FollowSymLinks </Location> </IfModule>

    Read the article

  • Engagement: Don’t Forget Your Employees!

    - by Kellsey Ruppel
    By Mark Brown, Sr. Director, Oracle WebCenter  This week we want to focus on Employee Engagement, and how it is critical to your business. Today we hear and read a great deal about “Customer Engagement” – and rightly so, it is those customers, whether they be traditional paying customers, citizens, students, club members, or whomever it is that are “paying the bills”.  A more engaged customer is more likely to make it easier to pay those bills by buying more, giving good reviews, or spreading the word of how wonderful their experience was. But what about those who are providing those services, those who design and make those goods; why is it that all too often they are left out of conversations concerning engagement? In fact, it is critical that we consider our employees as customers since they are using internal systems that run your organization the same way customers use external systems. Studies have shown that an organization in which the employees feel “engaged” or better able to make decisions, do their jobs, and are connected to their peers have better return to their stakeholders. (shareholders).  On the surface this seems obvious, happy employees are more productive employees. But it leads to the question – how many of our existing policies, systems and processes are actually reducing that level of engagement? Let’s look at a couple examples. If posting new information that may be of great value to everyone in the larger organization is hard to do because we use an antiquated system, then we’re making it hard to share and increasing the potential for duplicate work. If it is not trivially obvious how to create and publish this post, then chances are very high that I’ll put it on the bottom of my queue. And finally, when critical information is spread across various systems, intranet sites, workgroups and peoples inboxes, then it is very hard to learn and grow from that information.  These may sound trivial, but how often do we push things off not because it is intellectually challenging, we may have the answer at our fingertips, but because it is hard to make that information readily available.  If an engaged employee is a productive employee, then what can we do to increase their level of engagement? We can start by looking for opportunities to provide self-documenting self-service solutions. Our newer employees grew up using simplified web interfaces everyday and they loathe calling a help-desk unless it is the last resort. Sadly, many of our enterprise applications have not kept pace and we all still have processes that are based on sending an email -- like discount approvals, vacation requests, or even offer-letter approvals.   My suggestion is to pick one highly visible, high-impact process where employees are either reticent to execute on the process or openly complain about how cumbersome it is and look at the mechanism for that process. If there are better ways, streamlined steps, better UIs that could be done, then you have a candidate to reconfigure that process and make it more engaging. Looking to better engage your employees? Start here!

    Read the article

  • New Experts Direct Contribution - Multiple Currency in Analytics

    - by Cheryl
    We do our best to anticipate what you need to know when we design and write our courses for CRM On Demand. But we know that we cannot hit on every situation or implementation scenario that you might encounter. That's why I love our Experts Direct program - this is where we encourage our wide network of CRM On Demand experts to contribute knowledge that they have gained from working directly with companies on their specific challenges or questions. (See Direct From Our Experts!) The latest Experts Direct contribution comes from Leon Dolman, who works with CRM On Demand customers every day. Leon addresses what you should expect to see in your reports and in the application when your company's users enter opportunity revenue information in more than one currency. He works through a scenario to show how currency settings can affect the data that you see in your reports. For example, do you know what will you see in your Opportunity reports if you have two different currencies represented, besides your company's default currency, but your company administrator has only set exchange rates for one of them? Leon knows...and now he has shared that knowledge - and more - with the rest of us. Go to the Multiple Currency in Analytics item in the Training and Support Center to read more - and while you're there, take a look at the other Experts Direct content to tap into that expert knowledge that we're collecting for you. Just click the Browse More Topics link in the Experts Direct box on the home page to see the full list. And let us know if there are other topics that you'd like to see our experts address. Post a comment to start a conversation or send us an email.

    Read the article

  • The Work Order Printing Challenge

    - by celine.beck
    One of the biggest concerns we've heard from maintenance practitioners is the ability to print and batch print work order details along with its accompanying attachments. Indeed, maintenance workers traditionally rely on work order packets to complete their job. A standard work order packet can include a variety of information like equipment documentation, operating instructions, checklists, end-of-task feedback forms and the likes. Now, the problem is that most Asset Lifecycle Management applications do not provide a simple and efficient solution for process printing with document attachments. Work order forms can be easily printed but attachments are usually left out of the printing process. This sounds like a minor problem, but when you are processing high volume of work orders on a regular basis, this inconvenience can result in important inefficiencies. In order to print work order and its related attachments, maintenance personnel need to print the work order details and then go back to the work order and open each individual attachment using the proper authoring application to view and print each document. The printed output is collated into a work order packet. The AutoVue Document Print Service products that were just released in April 2010 aim at helping organizations address the work order printing challenge. Customers and partners can leverage the AutoVue Document Print Services to build a complete printing solution that complements their existing print server solution with AutoVue's document- and platform-agnostic document print services. The idea is to leverage AutoVue's printing services to invoke printing either programmatically or manually directly from within the work order management application, and efficiently process the printing of complete work order packets, including all types of attachments, from office files to more advanced engineering documents like 2D CAD drawings. Oracle partners like MIPRO Consulting, specialists in PeopleSoft implementations, have already expressed interest in the AutoVue Document Print Service products for their ability to offer print services to the PeopleSoft ALM suite, so that customers are able to print packages of documents for maintenance personnel. For more information on the subject, please consult MIPRO Consulting's article entitled Unsung Value: Primavera and AutoVue Integration into PeopleSoft posted on their blog. The blog post entitled Introducing AutoVue Document Print Service provides additional information on how the solution works. We would also love to hear what your thoughts are on the topic, so please do not hesitate to post your comments/feedback on our blog. Related Articles: Introducing AutoVue Document Print Service Print Any Document Type with AutoVue Document Print Services

    Read the article

  • Ajax-based data loading using jQuery.load() function in ASP.NET

    - by hajan
    In general, jQuery has made Ajax very easy by providing low-level interface, shorthand methods and helper functions, which all gives us great features of handling Ajax requests in our ASP.NET Webs. The simplest way to load data from the server and place the returned HTML in browser is to use the jQuery.load() function. The very firs time when I started playing with this function, I didn't believe it will work that much easy. What you can do with this method is simply call given url as parameter to the load function and display the content in the selector after which this function is chained. So, to clear up this, let me give you one very simple example: $("#result").load("AjaxPages/Page.html"); As you can see from the above image, after clicking the ‘Load Content’ button which fires the above code, we are making Ajax Get and the Response is the entire page HTML. So, rather than using (old) iframes, you can now use this method to load other html pages inside the page from where the script with load function is called. This method is equivalent to the jQuery Ajax Get method $.get(url, data, function () { }) only that the $.load() is method rather than global function and has an implicit callback function. To provide callback to your load, you can simply add function as second parameter, see example: $("#result").load("AjaxPages/Page.html", function () { alert("Page.html has been loaded successfully!") }); Since load is part of the chain which is follower of the given jQuery Selector where the content should be loaded, it means that the $.load() function won't execute if there is no such selector found within the DOM. Another interesting thing to mention, and maybe you've asked yourself is how we know if GET or POST method type is executed? It's simple, if we provide 'data' as second parameter to the load function, then POST is used, otherwise GET is assumed. POST $("#result").load("AjaxPages/Page.html", { "name": "hajan" }, function () { ////callback function implementation });   GET $("#result").load("AjaxPages/Page.html", function () { ////callback function implementation });   Another important feature that $.load() has ($.get() does not) is loading page fragments. Using jQuery's selector capability, you can do this: $("#result").load("AjaxPages/Page.html #resultTable"); In our Page.html, the content now is: So, after the call, only the table with id resultTable will load in our page.   As you can see, we have loaded only the table with id resultTable (1) inside div with id result (2). This is great feature since we won't need to filter the returned HTML content again in our callback function on the master page from where we have called $.load() function. Besides the fact that you can simply call static HTML pages, you can also use this function to load dynamic ASPX pages or ASP.NET ASHX Handlers . Lets say we have another page (ASPX) in our AjaxPages folder with name GetProducts.aspx. This page has repeater control (or anything you want to bind dynamic server-side content) that displays set of data in it. Now, I want to filter the data in the repeater based on the Query String parameter provided when calling that page. For example, if I call the page using GetProducts.aspx?category=computers, it will load only computers… so, this will filter the products automatically by given category. 
The example ASPX code of GetProducts.aspx page is: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="GetProducts.aspx.cs" Inherits="WebApplication1.AjaxPages.GetProducts" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <div> <table id="tableProducts"> <asp:Repeater ID="rptProducts" runat="server"> <HeaderTemplate> <tr> <th>Product</th> <th>Price</th> <th>Category</th> </tr> </HeaderTemplate> <ItemTemplate> <tr> <td> <%# Eval("ProductName")%> </td> <td> <%# Eval("Price") %> </td> <td> <%# Eval("Category") %> </td> </tr> </ItemTemplate> </asp:Repeater> </ul> </div> </form> </body> </html> The C# code-behind sample code is: public partial class GetProducts : System.Web.UI.Page { public List<Product> products; protected override void OnInit(EventArgs e) { LoadSampleProductsData(); //load sample data base.OnInit(e); } protected void Page_Load(object sender, EventArgs e) { if (Request.QueryString.Count > 0) { if (!string.IsNullOrEmpty(Request.QueryString["category"])) { string category = Request.QueryString["category"]; //get query string into string variable //filter products sample data by category using LINQ //and add the collection as data source to the repeater rptProducts.DataSource = products.Where(x => x.Category == category); rptProducts.DataBind(); //bind repeater } } } //load sample data method public void LoadSampleProductsData() { products = new List<Product>(); products.Add(new Product() { Category = "computers", Price = 200, ProductName = "Dell PC" }); products.Add(new Product() { Category = "shoes", Price = 90, ProductName = "Nike" }); products.Add(new Product() { Category = "shoes", Price = 66, ProductName = "Adidas" }); products.Add(new Product() { Category = "computers", Price = 210, ProductName = "HP PC" }); products.Add(new Product() { Category = "shoes", Price = 85, ProductName = "Puma" }); } } //sample Product class public class Product { public string ProductName { get; set; } public decimal Price { get; set; } public string Category { get; set; } } Mainly, I just have sample data loading function, Product class and depending of the query string, I am filtering the products list using LINQ Where statement. If we run this page without query string, it will show no data. If we call the page with category query string, it will filter automatically. 
Example: /AjaxPages/GetProducts.aspx?category=shoes The result will be: or if we use category=computers, like this /AjaxPages/GetProducts.aspx?category=computers, the result will be: So, now using jQuery.load() function, we can call this page with provided query string parameter and load appropriate content… The ASPX code in our Default.aspx page, which will call the AjaxPages/GetProducts.aspx page using jQuery.load() function is: <asp:RadioButtonList ID="rblProductCategory" runat="server"> <asp:ListItem Text="Shoes" Value="shoes" Selected="True" /> <asp:ListItem Text="Computers" Value="computers" /> </asp:RadioButtonList> <asp:Button ID="btnLoadProducts" runat="server" Text="Load Products" /> <!-- Here we will load the products, based on the radio button selection--> <div id="products"></div> </form> The jQuery code: $("#<%= btnLoadProducts.ClientID %>").click(function (event) { event.preventDefault(); //preventing button's default behavior var selectedRadioButton = $("#<%= rblProductCategory.ClientID %> input:checked").val(); //call GetProducts.aspx with the category query string for the selected category in radio button list //filter and get only the #tableProducts content inside #products div $("#products").load("AjaxPages/GetProducts.aspx?category=" + selectedRadioButton + " #tableProducts"); }); The end result: You can download the code sample from here. You can read more about jQuery.load() function here. I hope this was useful blog post for you. Please do let me know your feedback. Best Regards, Hajan

    Read the article

  • Having trouble Getting "RTSP over HTTP"

    - by Muhammad Adeel Zahid
    There is an axis camera that is connected to our site (camba.tv) through axis one click connection component (which acts as proxy). We can communicate with this camera only through http by setting the proxy to our OCCC server's address. If we want to get RTSP streams (h.264) we are only left with "RTSP over HTTP" option. For this I have followed axis VAPIX 3 documentation section 3.3. I issue requests through fiddler but don't get any response. But when i put the URL (axrtsphttp://1.00408CBEA38B/axis-media/media.amp) in windows media player (with proxy set to OCCC server 212.78.237.156:3128) the player is able to get RTSP stream over HTTP after logging in. I have created a request trace of communication between camera and windows media player through wireshark and the request that brings the stream looks like http://1.00408cbea38b/axis-media/media.amp HTTP/1.1 x-sessioncookie: 619 User-Agent: Axis AMC Host: 1.00408CBEA38B Proxy-Connection: Keep-Alive Pragma: no-cache Authorization: Digest username="root",realm="AXIS_00408CBEA38B",nonce="000a8b40Y0100409c13ac7e6cceb069289041d8feb1691",uri="/axis-media/media.amp",cnonce="9946e2582bd590418c9b70e1b17956c7",nc=00000001,response="f3cab86fc84bfe33719675848e7fdc0a",qop="auth" HTTP/1.0 200 OK Content-Type: application/x-rtsp-tunnelled Date: Tue, 02 Nov 2010 11:45:23 GMT RTSP/1.0 200 OK CSeq: 1 Content-Type: application/sdp Content-Base: rtsp://1.00408CBEA38B/axis-media/media.amp/ Date: Tue, 02 Nov 2010 11:45:23 GMT Content-Length: 410 v=0 o=- 1288698323798001 1288698323798001 IN IP4 1.00408CBEA38B s=Media Presentation e=NONE c=IN IP4 0.0.0.0 b=AS:50000 t=0 0 a=control:* a=range:npt=0.000000- m=video 0 RTP/AVP 96 b=AS:50000 a=framerate:30.0 a=transform:1,0,0;0,1,0;0,0,1 a=control:trackID=1 a=rtpmap:96 H264/90000 a=fmtp:96 packetization-mode=1; profile-level-id=420029; sprop-parameter-sets=Z0IAKeNQFAe2AtwEBAaQeJEV,aM48gA== RTSP/1.0 200 OK CSeq: 2 Session: 3F4763D8; timeout=60 Transport: RTP/AVP/TCP;unicast;interleaved=0-1;ssrc=060922C6;mode="PLAY" Date: Tue, 02 Nov 2010 11:45:24 GMT RTSP/1.0 200 OK CSeq: 3 Session: 3F4763D8 Range: npt=0- RTP-Info: url=rtsp://1.00408CBEA38B/axis-media/media.amp/trackID=1;seq=7392;rtptime=4190934902 Date: Tue, 02 Nov 2010 11:45:24 GMT [Binary Stream Content] But when i copy this request to fiddler, I only get 200 status code with content-type set to application/x-rtsp-tunneled and there is no stream data. The only thing i do different with stream is to use Basic in authorization header instead of Digest and I do not get 401 (Un authorized) status code. Can anyone explain what's happening here? How can I write request sequences to get stream in fiddler? If it is needed, I can upload the wireshark request dump somewhere.

    Read the article

  • The Loneliest Road in America and the OTN Garage

    - by rickramsey
    Source I never told anyone how the image of the OTN Garage on Facebook came to be. I took the Facebook picture on Route 50 in Nevada, USA, in October of 2010. I was riding from Colorado to Oracle OpenWorld in San Francisco, so it was probably October. Route 50 is known as "The Loneliest Road in America." There are roads across Nevada that have even LESS traffic, but Route 50 still one. desolate. road. Although I have seen stranger things while riding along Nevada's Extraterrestrial Highway, I still run across notable oddities every time I ride Route 50. Like the old man with a bandolero of water bottles jogging along the side of the highway in the middle of the day, 50 miles from the closest town. First ultra-marathoner I'd seen in action. He waved at me. Or the dozen Corvettes with California license plates driving toward me, all doing the speed limit in the middle of nowhere because they were being tailed by half a dozen Nevada state troopers. #fail. I don't remember which town I was in, but I noticed the building when I stopped at the gas station. While standing there pouring fuel into the Harley, the store caught my eye. So I pulled the bike in front and walked inside. The owner is a little old lady, about 100 years old. Most of the goods she had on the shelves looked like they had been placed there during WWII. She was itty bitty and could barely see over the counter, but she was so happy when I bought a bar of Hershey's chocolate that she gave me a five cent discount. I took a few pictures and, when I got back, Kemer Thomson, who sometimes blogs here, photoshopped the OTN Garage and Oil Change signs onto it. The bike is a 2009 Road King Classic with a Bob Dron fairing and a Corbin heated seat. The seat came in handy when I rode home over Tioga Pass. The Road King is a very comfy touring bike with a great Harley rumble. I'm kinda sorry I sold it. When I stopped for fuel about 75 miles down the road at the next town, I peeled back the chocolate bar. I had turned into powder. Probably 50 years ago. - Rick Website Newsletter Facebook Twitter

    Read the article

  • Universal navigation menu across domains

    - by Jon Harley
    I'd like to start by saying that I've searched for hours and could not find a definitive answer to my question. Across different sites on different second-level domains exists a universal navigation bar with a collection of roughly 30 links. This universal bar is exactly the same for every page on each domain. The bar's HTML, CSS and JavaScript are all stored in a subfolder for each domain and the HTML is embedded upon serving the page and is not being injected on the client side. None of the links use any rel directives and are as vanilla as can be. My question is about Google's duplicate content rule. Would something like this be considered duplicate content? Matt Cutt's blog post about duplicate content mentions boilerplate repetition, but then he mentions lengthy legalese. Since the text in this universal bar is brief and uses common terms, I wonder if this same rule applies. If this is considered duplicate content, what would be a good way to correct the problem? Thank you for your help.

    Read the article

  • Apache2's recursive directory permission requirement

    - by Sn3akyP3t3
    The experience I've had thus far is with Ubuntu 10.04 and 12.04 64-bit, so if there are differences on other OSes I'd like to know whether this is an OS-specific problem or not. The issue I've experienced is mostly confusion: once the cause of the problem is identified and corrected, there are no further related problems. The symptom is Error 403 Forbidden. Typically the trigger is attempting to use a directory other than /var/www/ for content. The cause is simply permissions, but it's puzzling why the required permissions must persist from at least one level below the root directory all the way down to the directory where the content is stored. For example:

    Alias /example/ "/home/user/permissions/can/be/confusing/with/apache/"
    <Directory /home/user/permissions/can/be/confusing/with/apache/>
        Options FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>

    Here www-data is the user that spawned Apache and "user" is a member of the www-data group. Thus, if ownership of /home/user/* is user:user, then all that should be necessary to display content with Apache is read and execute permission, so d---r-x--- should suffice; for practical purposes I'm using drwxr-x--- for most directories. However, if all directories under /home/user/* have permissions of drwxr-x--- and /home/user/ itself has permissions of drwx------, then content always fails with error 403. This is strange because it doesn't follow what I would consider the traditional logic of permissions, which should only apply to the current working directory or a particular file in that directory, not to directories further back in the chain. Is this by design or is it a bug?
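    This is by design rather than an Apache quirk: to open /home/user/a/b/page.html, the kernel checks execute (search) permission on every directory component of the path for the process doing the lookup, so the Apache worker needs +x on /home, /home/user, and so on down to the content directory, not just on the final one. A small sketch (hypothetical class name, not from the original post; run it as the Apache user, e.g. via sudo -u www-data) that walks from the content directory back up to / and reports which component blocks traversal:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class TraversalCheck {
        public static void main(String[] args) {
            // Default to the example path from the question; pass another path as an argument.
            Path dir = Paths.get(args.length > 0 ? args[0]
                    : "/home/user/permissions/can/be/confusing/with/apache");

            // Every component needs execute (search) permission for the process
            // doing the lookup; read permission on the final directory only
            // matters for listings (autoindex, MultiViews negotiation).
            for (Path p = dir.toAbsolutePath().normalize(); p != null; p = p.getParent()) {
                System.out.printf("%-55s executable=%-5b readable=%b%n",
                        p, Files.isExecutable(p), Files.isReadable(p));
            }
        }
    }

    Any component that reports executable=false for the Apache user is the one producing the 403, which matches the drwx------ /home/user/ case described above.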

    Read the article

  • DKIM, SPF, PTR records are not working properly with my domain

    - by shihon
    I configured my server and well authenticate email system with DKIM key, SPF record and PTR records, when i start to sent out mails from phplist interface to my users ~50000, my domain is spammed by google. In headers, signed by and mailed by tag shows by my domain : appmail.co, I also test my domain via check mail provide by port25, report is: This message is an automatic response from Port25's authentication verifier service at verifier.port25.com. The service allows email senders to perform a simple check of various sender authentication mechanisms. It is provided free of charge, in the hope that it is useful to the email community. While it is not officially supported, we welcome any feedback you may have at . Thank you for using the verifier, The Port25 Solutions, Inc. team ========================================================== Summary of Results SPF check: pass DomainKeys check: neutral DKIM check: pass Sender-ID check: pass SpamAssassin check: ham ========================================================== Details: HELO hostname: app.appmail.co Source IP: 108.179.192.148 mail-from: [email protected] SPF check details: Result: pass ID(s) verified: [email protected] DNS record(s): appmail.co. SPF (no records) appmail.co. 14400 IN TXT "v=spf1 +a +mx +ip4:108.179.192.148 ?all" appmail.co. 14400 IN A 108.179.192.148 DomainKeys check details: Result: neutral (message not signed) ID(s) verified: [email protected] DNS record(s): DKIM check details: Result: pass (matches From: [email protected]) ID(s) verified: header.d=appmail.co Canonicalized Headers: content-type:multipart/alternative;'20'boundary=047d7b2eda75d8544d04c17b6841'0D''0A' to:[email protected]'0D''0A' from:shashank'20'sharma'20'<[email protected]>'0D''0A' subject:Test'0D''0A' message-id:<CADnDhbH9aDBk3Ho2-CrG7gwOoD6RNX0sFq4bWL64+kmo=9HjWg@mail.gmail.com>'0D''0A' date:Sat,'20'2'20'Jun'20'2012'20'16:44:50'20'+0530'0D''0A' mime-version:1.0'0D''0A' dkim-signature:v=1;'20'a=rsa-sha256;'20'q=dns/txt;'20'c=relaxed/relaxed;'20'd=appmail.co;'20's=default;'20'h=Content-Type:To:From:Subject:Message-ID:Date:MIME-Version;'20'bh=GS6uwlT+weKcrrLJ2I+cjBtWPq9nvhwRlNAJebOiQOk=;'20'b=; Canonicalized Body: --047d7b2eda75d8544d04c17b6841'0D''0A' Content-Type:'20'text/plain;'20'charset=UTF-8'0D''0A' '0D''0A' Hello'20'Senders'0D''0A' '0D''0A' --047d7b2eda75d8544d04c17b6841'0D''0A' Content-Type:'20'text/html;'20'charset=UTF-8'0D''0A' '0D''0A' Hello'20'Senders'0D''0A' '0D''0A' --047d7b2eda75d8544d04c17b6841--'0D''0A' DNS record(s): default._domainkey.appmail.co. 14400 IN TXT "v=DKIM1; k=rsa; p=MHwwDQYJKoZIhvcNAQEBBQADawAwaAJhALGCOdMeZRxRHoatH7/KCvI1CKS0wOOsTAq0LLgPsOpMolifpVQDKOWT2zq/6LHVmDVjXLbnWO2d4ry/riy7ei66pLpnAV5ceIUSjBRusI8jcF9CZhPrh/OImsKVUb9ceQIDAQAB;" NOTE: DKIM checking has been performed based on the latest DKIM specs (RFC 4871 or draft-ietf-dkim-base-10) and verification may fail for older versions. If you are using Port25's PowerMTA, you need to use version 3.2r11 or later to get a compatible version of DKIM. Sender-ID check details: Result: pass ID(s) verified: [email protected] DNS record(s): appmail.co. SPF (no records) appmail.co. 14400 IN TXT "v=spf1 +a +mx +ip4:108.179.192.148 ?all" appmail.co. 
14400 IN A 108.179.192.148 SpamAssassin check details: SpamAssassin v3.3.1 (2010-03-16) Result: ham (-0.1 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 T_RP_MATCHES_RCVD Envelope sender domain matches handover relay domain 0.0 HTML_MESSAGE BODY: HTML included in message -0.5 BAYES_05 BODY: Bayes spam probability is 1 to 5% [score: 0.0288] -0.1 DKIM_VALID_AU Message has a valid DKIM or DK signature from author's domain 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily valid -0.1 DKIM_VALID Message has at least one valid DKIM or DK signature 0.5 SINGLE_HEADER_1K A single header contains 1K-2K characters ========================================================== Original Email Return-Path: <[email protected]> Received: from app.appmail.co (108.179.192.148) by verifier.port25.com id hp7qqo11u9cc for <[email protected]>; Sat, 2 Jun 2012 07:14:52 -0400 (envelope-from <[email protected]>) Authentication-Results: verifier.port25.com; spf=pass [email protected] Authentication-Results: verifier.port25.com; domainkeys=neutral (message not signed) [email protected] Authentication-Results: verifier.port25.com; dkim=pass (matches From: [email protected]) header.d=appmail.co Authentication-Results: verifier.port25.com; sender-id=pass [email protected] DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=appmail.co; s=default; h=Content-Type:To:From:Subject:Message-ID:Date:MIME-Version; bh=GS6uwlT+weKcrrLJ2I+cjBtWPq9nvhwRlNAJebOiQOk=;b=pNw3UQNMoNyZ2Ujv8omHGodKVu/55S8YdBEsA5TbRciga/H7f+5noiKvo60vU6oXYyzVKeozFHDoOEMV6m5UTgkdBefogl+9cUIbt5CSrTWA97D7tGS97JblTDXApbZH; Received: from mail-pb0-f46.google.com ([209.85.160.46]:57831) by app.appmail.co with esmtpa (Exim 4.77) (envelope-from <[email protected]>) id 1SamIF-00055f-Om for [email protected]; Sat, 02 Jun 2012 16:44:51 +0530 Received: by pbbrp8 with SMTP id rp8so4165728pbb.5 for <[email protected]>; Sat, 02 Jun 2012 04:14:51 -0700 (PDT) MIME-Version: 1.0 Received: by 10.68.216.33 with SMTP id on1mr19414885pbc.105.1338635690988; Sat, 02 Jun 2012 04:14:50 -0700 (PDT) Received: by 10.143.66.13 with HTTP; Sat, 2 Jun 2012 04:14:50 -0700 (PDT) Date: Sat, 2 Jun 2012 16:44:50 +0530 Message-ID: <CADnDhbH9aDBk3Ho2-CrG7gwOoD6RNX0sFq4bWL64+kmo=9HjWg@mail.gmail.com> Subject: Test From: shashank sharma <[email protected]> To: [email protected] Content-Type: multipart/alternative; boundary=047d7b2eda75d8544d04c17b6841 X-AntiAbuse: This header was added to track abuse, please include it with any abuse report X-AntiAbuse: Primary Hostname - app.appmail.co X-AntiAbuse: Original Domain - verifier.port25.com X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12] X-AntiAbuse: Sender Address Domain - appmail.co --047d7b2eda75d8544d04c17b6841 Content-Type: text/plain; charset=UTF-8 Hello Senders --047d7b2eda75d8544d04c17b6841 Content-Type: text/html; charset=UTF-8 Hello Senders --047d7b2eda75d8544d04c17b6841-- I also tried to send mail on yahoo , rediff but i get mails in spam. Please help me to sort out this issue

    Read the article

  • Why My AdSense Account is not accepted

    - by Muhammad Adeel Zahid
    I created a blog on Blogger a few days ago and added two blog entries to it. When I try creating an AdSense account with the blog's address, the application is not accepted because of Page Type. I have searched around on the net and found that this is probably due to duplicate or insufficient content on the blog, but each of my blog entries is more than 2,000 words, and I typed roughly 80 percent of the content myself, with only a few code blocks copied from other sites. Below is the content of the email I received in response to my application. Hello, Thank you for your interest in Google AdSense. Unfortunately, after reviewing your application, we're unable to accept you into AdSense at this time. We did not approve your application for the reasons listed below. Issues: - Page Type Further detail: Page Type: In order to participate in Google AdSense, publishers' websites and application information must satisfy the following guidelines: Your website must be your own top-level domain (www.example.com and not www.example.com/mysite). You must provide accurate personal information with your application that matches the information on your domain registration. Your website must contain substantial, original content. Your site must comply with Google AdSense program policies: https://www.google.com/adsense/policies, which include Google's webmaster quality guidelines: http://www.google.com/support/webmasters/bin/answer.py?answer=35769#quality. If your site satisfies the above criteria in the future, please resubmit your application and we'll review it as soon as possible. My personal credentials are also the same as they appear in my Google account. I can't figure out the problem. Any help or suggestion is highly appreciated. Regards

    Read the article

  • Can I save & store a user's submission in a way that proves that the data has not been altered, and that the timestamp is accurate?

    - by jt0dd
    There are many situations where a provably valid timestamp attached to a certain post (a submission of information) could be invaluable for the post owner's legal purposes. I'm not looking for a service that achieves this, as requested in this great question, but rather a method for building such a service. For legal authentication (in most any legal system) of text content and its submission time, the owner of the content would need to prove: that the timestamp itself has not been altered and was accurate to begin with, and that the text content linked to the timestamp has not been altered. I'd like to know how to achieve this via programming (not a language-specific solution, but rather the methodology behind the solution). Can a timestamp be validated as accurate to the time the content was really submitted? Can data be stored in a form that can be read, but not written to, in a provable way? In other words, can I save and store a user's submission in a way that proves that the data has not been altered, and that the timestamp is accurate? I can't think of any programming method that would make this possible, but I am not the most experienced programmer out there. Based on MidnightLightning's answer to the question I cited, this sort of thing is being done. Clarification: I'm looking for a method (hashing, encryption, etc.) that would allow an average guy like me to achieve the desired effect through programming. I'm interested in this subject for the purpose of defensive publication. I'd like to learn a method that allows an everyday programmer to pick up his computer, write a program, pass information through it, and say: I created this text at this moment in time, and I can prove it. This means the information should be protected from the programmer who writes the code as well. Perhaps a third-party API would be required; I'm OK with that.
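
    The usual building block for this is a cryptographic digest: hash the exact text, then have an independent party attest to the time at which that digest existed, for example an RFC 3161 timestamping authority or simply publishing the digest somewhere public and dated. Below is a minimal sketch (not from the original question) of the commit step in Python, using only the standard library; the record structure and names are illustrative, and the locally generated timestamp is only a claim until a third party signs or publishes the digest.

      # Sketch of the hashing half of a trusted-timestamp scheme. The SHA-256
      # digest commits to the exact text; proving *when* it existed still
      # requires handing the digest to a party others would trust (an RFC 3161
      # timestamping authority, a public posting, etc.), because a timestamp
      # generated on your own machine proves nothing by itself.
      import hashlib
      import json
      from datetime import datetime, timezone

      def make_submission_record(text: str) -> dict:
          """Hash the submitted text and pair it with a UTC timestamp claim."""
          digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
          return {
              "sha256": digest,
              "claimed_utc": datetime.now(timezone.utc).isoformat(),
          }

      record = make_submission_record("I created this text at this moment in time.")
      print(json.dumps(record, indent=2))
      # Next step (outside this sketch): send record["sha256"] to an independent
      # timestamping service and archive the signed response it returns.

    Note that this also addresses the "protected from the programmer" requirement only partially: anyone can recompute a digest later, so the trust ultimately rests with the third party that recorded when the digest was first seen.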

    Read the article

  • A Technique for Performing Cross-host Upgrades to FMW 11gR1

    - by reza.shafii
    The main tool used for the upgrade of iAS 10g mid-tier environments (data not stored in 10g metadata repository schemas) to Fusion Middleware (FMW) 11gR1 is the FMW Upgrade Assistant (UA). This tool performs what we call an out-of-place upgrade, which in a nutshell means the following: the upgrade is performed by pointing the UA to a 10g source topology as well as an 11g destination topology. The destination topology must be created, using the standard FMW 11g installation and configuration process, prior to the execution of the UA. The UA carries over all of the required changes from the source environment to the destination. This approach has a number of advantages rooted in the fact that the source environment - which is presumably working well and serving its needs - is not disturbed during the upgrade process, as the UA only performs read-only operations on it. The UA today can only perform such out-of-place upgrades when the source and destination topologies reside on the same machine. This can sometimes be an issue when the host on which the iAS 10g environment is installed is running at full capacity and installing new hardware for the purpose of the upgrade (in most cases what would be needed is extra memory) is completely infeasible. In such cases, an upgrade across hosts is still possible by using the following technique: Back up your source environment and restore it onto a target machine. The backup and restore procedures for the iAS 10.1.2 components are described within this section of the release's Administration Guide. As described in the docs, the Oracle Application Server Backup and Recovery Tool provides capabilities for backing up the installation on one machine and restoring it on another, which is exactly what you want to do for the purpose of a cross-host upgrade. Ensure that the restored environment on your target host is fully functional. Go through the upgrade steps on the target machine to perform the out-of-place upgrade using the UA. Although this does add another big step to the overall upgrade process, it does make it possible to perform a cross-host upgrade to 11gR1 when necessary. The easiest approach would of course be to find a way of ensuring that the required hardware capacity for the upgrade is available on the original 10g host. Techniques such as scheduling the upgrade at low-traffic times and/or temporarily stopping other processes running on the machine to free up some memory might provide the memory needed to perform the out-of-place upgrade and spare you the need for the backup/restore technique I have described in this post.

    Read the article
