Search Results

Search found 29495 results on 1180 pages for 'cross site scripting'.


  • Nginx access log shows authenticated user "admin"

    - by bearcat
    I came across this line in my Nginx access log:
      218.201.121.99 - admin [12/Dec/2012:18:33:18 +0800] "GET /manager/html HTTP/1.1" 444 0 "-" "-"
    Let me stress that there is only one record with this IP. Notice the authenticated user admin. After some googling, all I could find is that this field is the authenticated user (http://wiki.nginx.org/HttpCoreModule#.24remote_user), authenticated by the Auth Basic Module (http://wiki.nginx.org/HttpAuthBasicModule). However, nowhere in my site configuration do I use HTTP basic authentication. What is going on? How did it get there? Was the user authenticated?
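
    For what it's worth, nginx fills $remote_user by decoding the Basic Authorization header of the request itself, whether or not auth_basic is configured for the location, so a scanner can put any name there. A quick way to reproduce such a log entry (host and credentials are placeholders; this assumes the default combined log format):

      curl -u admin:admin http://your-server/manager/html
      # the access log then shows: <client-ip> - admin [...] "GET /manager/html HTTP/1.1" ...
      # even though no authentication was performed by the server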

    Read the article

  • Apache Not Accepting a Path in My Home Folder

    - by Promather
    I have been trying to set up an Apache site to use a folder in my home directory, without any success. I followed the steps on this page exactly: https://help.ubuntu.com/community/ApacheMySQLPHP yet I did not succeed; I keep getting error 403, which says that the server doesn't have permission to access the requested page. I searched forums and many suggested changing the permissions of the folder. I went straight away and set the permissions to 777, but that didn't solve the problem. I made another search and somebody gave me a clue: it could be because my home folder is encrypted. I believe this could be the problem, but: What is the relation between encryption and Apache? I suppose the Apache server is requesting the file from the system, rather than trying to access the file bytes directly! Is there any way to solve this problem? I don't want to move the folder to /var/www because I am using this Apache for testing, so I want whatever change I make to be reflected immediately, rather than having to copy files, which is error prone.
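
    Two things worth checking here, sketched with placeholder usernames and paths: Apache needs execute (search) permission on every directory in the path, not just the final folder, and an ecryptfs-encrypted home is only mounted while that user is logged in, so the www-data process may be looking at an empty mount point.

      # every component of the path must be searchable by Apache
      chmod o+x /home/youruser
      chmod o+x /home/youruser/public_html
      # verify what Apache itself can actually see
      sudo -u www-data ls /home/youruser/public_html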

    Read the article

  • Using Lighttpd: apache proxy or direct connection?

    - by Halfgaar
    Hi, I'm optimizing a site by using lighttpd for the static media. I've found that a recommended solution is to use an Apache proxy pointing at the lighttpd server. But does that use up an Apache thread/process per request? In my setup, I've noticed that all my processes are used up, even though they aren't doing anything, CPU-wise. To free up Apache processes, I've configured lighttpd, and the number of processes needed has dropped significantly, as Munin shows. However, I've set it up so clients connect directly to lighty, to prevent Apache workers from being occupied serving static media. My question is: when using an Apache proxy, does that also use up a process/worker per request?
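
    For reference, the proxying approach being weighed looks roughly like this (path and port are assumptions), and each request that flows through it does occupy an Apache worker for the duration of the proxied response, which is exactly the trade-off being asked about:

      # in the Apache vhost
      ProxyPass        /static/ http://127.0.0.1:81/static/
      ProxyPassReverse /static/ http://127.0.0.1:81/static/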

    Read the article

  • SQL Server Transaction Marks: Restoring multiple databases to a common relative point

    - by Mladen Prajdic
    We're all familiar with the ability to restore a database to a point in time using the RESTORE WITH STOPAT statement. But what if we have multiple databases that are accessed from one application or are modifying each other? And over multiple instances? And all databases have different workloads? And we want to restore all of the databases to some known common relative point? The catch here is that this common relative point isn't the same point in time for all databases. This common relative point in time might be now in DB1, now-1 hour in DB2 and yesterday in DB3. And we don't know the exact times. Let me introduce you to Transaction Marks. When we run a marked transaction using the WITH MARK option, a flag is set in the transaction log and a row is added to the msdb..logmarkhistory table. When restoring a transaction log backup we can restore to either before or after that marked transaction. The best thing is that we don't even need to have one database modifying another database. All we have to do is use a marked transaction with the same name in the different databases. Let's see how this works with an example. The code comments say what's going on.

      USE master
      GO
      CREATE DATABASE TestTxMark1
      GO
      USE TestTxMark1
      GO
      CREATE TABLE TestTable1(ID INT, VALUE UNIQUEIDENTIFIER)
      -- insert some data into the table so we can have a starting point
      INSERT INTO TestTable1
      SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NULL
      FROM master..spt_values
      ORDER BY RN
      SELECT * FROM TestTable1
      GO
      -- TAKE A FULL BACKUP of the database
      BACKUP DATABASE TestTxMark1 TO DISK = 'c:\TestTxMark1.bak'
      GO

      USE master
      GO
      CREATE DATABASE TestTxMark2
      GO
      USE TestTxMark2
      GO
      CREATE TABLE TestTable2(ID INT, VALUE UNIQUEIDENTIFIER)
      -- insert some data into the table so we can have a starting point
      INSERT INTO TestTable2
      SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NEWID()
      FROM master..spt_values
      ORDER BY RN
      SELECT * FROM TestTable2
      GO
      -- TAKE A FULL BACKUP of the database
      BACKUP DATABASE TestTxMark2 TO DISK = 'c:\TestTxMark2.bak'
      GO

      -- start a marked transaction that modifies both databases
      BEGIN TRAN TxDb WITH MARK
          -- update values from NULL to a random value
          UPDATE TestTable1
          SET VALUE = NEWID();
          -- update the first 100 values from a random value
          -- to NULL in the other database
          UPDATE TestTxMark2.dbo.TestTable2
          SET VALUE = NULL
          WHERE ID <= 100;
      COMMIT
      GO

      -- some time goes by here
      -- with various database activity...

      -- We see two entries for marks in each database.
      -- This is just informational and has no bearing on the restore itself.
      SELECT * FROM msdb..logmarkhistory

      USE master
      GO
      -- create a log backup to restore to the mark point
      BACKUP LOG TestTxMark1 TO DISK = 'c:\TestTxMark1.trn'
      GO
      -- drop the database so we can restore it back
      DROP DATABASE TestTxMark1
      GO

      USE master
      GO
      -- create a log backup to restore to the mark point
      BACKUP LOG TestTxMark2 TO DISK = 'c:\TestTxMark2.trn'
      GO
      -- drop the database so we can restore it back
      DROP DATABASE TestTxMark2
      GO

      -- RESTORE THE DATABASE BACK TO BEFORE OUR TRANSACTION
      -- restore the full backup
      RESTORE DATABASE TestTxMark1 FROM DISK = 'c:\TestTxMark1.bak' WITH NORECOVERY;
      -- restore the log backup to the transaction mark
      RESTORE LOG TestTxMark1 FROM DISK = 'c:\TestTxMark1.trn' WITH RECOVERY,
          -- recover to the state before the transaction
          STOPBEFOREMARK = 'TxDb';
          -- recover to the state after the transaction
          -- STOPATMARK = 'TxDb';
      GO

      -- RESTORE THE DATABASE BACK TO BEFORE OUR TRANSACTION
      -- restore the full backup
      RESTORE DATABASE TestTxMark2 FROM DISK = 'c:\TestTxMark2.bak' WITH NORECOVERY;
      -- restore the log backup to the transaction mark
      RESTORE LOG TestTxMark2 FROM DISK = 'c:\TestTxMark2.trn' WITH RECOVERY,
          -- recover to the state before the transaction
          STOPBEFOREMARK = 'TxDb';
          -- recover to the state after the transaction
          -- STOPATMARK = 'TxDb';
      GO

      USE TestTxMark1
      -- we restored to a time before the transaction,
      -- so we have NULL values in our table
      SELECT * FROM TestTable1

      USE TestTxMark2
      -- we restored to a time before the transaction,
      -- so we DON'T have NULL values in our table
      SELECT * FROM TestTable2

    Transaction marks can be used like a crude sync mechanism for cross-database operations. With them we can mark our databases with a common "restore to" point, so we know we have a valid state between all databases to restore to.

    Read the article

  • Firefox 12's hardware acceleration on Ubuntu 12.04 LTS

    - by user64943
    After installing Ubuntu 12.04 LTS (32-bit), Firefox 12 works fine but without hardware acceleration. Needless to say, I have the latest nVidia proprietary drivers installed and, in Firefox Preferences, on the "Advanced" tab, "Browsing" section, the option "Use hardware acceleration when available" is checked. I have tried the following things before asking this question: creating a boolean key "webgl.force-enabled" and setting it to true in Firefox's about:config page; starting a new profile, as suggested in the thread "Mozilla Firefox 12 is very slow on Ubuntu 12.04 LTS"; updating my nVidia driver to version 295.53. None of this has worked. As you can see below in Firefox's about:support report, the "Graphics" section shows no "GPU Accelerated Windows":
      Adapter Description: NVIDIA Corporation -- GeForce GTX 460/PCIe/SSE2
      Vendor ID: NVIDIA Corporation
      Device ID: GeForce GTX 460/PCIe/SSE2
      Driver Version: 4.2.0 NVIDIA 295.53
      WebGL Renderer: NVIDIA Corporation -- GeForce GTX 460/PCIe/SSE2 -- 4.2.0 NVIDIA 295.53
      GPU Accelerated Windows: 0
      AzureBackend: skia
    I use the following site to test hardware acceleration: http://ie.microsoft.com/testdrive/Performance/FishBowl/ On Windows 7 I get 60 fps even with 1,750 fish in the browser's full-screen mode (1680x1050, 32-bit color). On Ubuntu 12.04 LTS, with the same nVidia drivers (as shown in the report), it won't go faster than 15 fps with only 1,000 fish. Can anybody help me? Best regards,
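
    One more about:config knob commonly suggested for this, offered as a sketch rather than a verified fix for this card/driver combination: forcing layer acceleration on, alongside the WebGL preference already tried.

      layers.acceleration.force-enabled = true
      webgl.force-enabled = true
      # after changing these and restarting, check about:support again for "GPU Accelerated Windows 1/1"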

    Read the article

  • How Can I Point My Local Testing Server at My GitHub Repository?

    - by Goober
    Up until a few days ago, my setup was as follows. Using SVN, all of the websites I developed were committed to a source control drop box on a local testing server. Then, using IIS, a new website was set up to point at the last revision of each particular site and display it to the outside world at a specific URL. I have just moved over to using Git and GitHub, meaning my source-controlled code is no longer stored on the local testing server. As a result, I am not sure how to do the same thing I did with the SVN setup, but I essentially need that same setup again, just using Git. So basically, how can I get my local testing server to point at the GitHub repository for each site? Help greatly appreciated.
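
    One common pattern, sketched under assumed paths, repository names and branch: keep a normal clone inside the IIS webroot and refresh it from GitHub on a schedule or from a deploy script.

      git clone https://github.com/youruser/yoursite.git C:\inetpub\wwwroot\yoursite
      REM later, from a scheduled task or deploy hook, update the checkout IIS serves:
      cd C:\inetpub\wwwroot\yoursite
      git fetch origin
      git reset --hard origin/master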

    Read the article

  • How to install compat-drivers or compat-wireless

    - by Sasho
    Could someone please explain to me in detail how to install one of these drivers? I am running Ubuntu 12.04.2 and have the infamous (as I see everywhere on the net) problem with the Atheros AR9462 wifi card. I have tried everything to fix it, and compat drivers are the only solution I haven't tried. I tried to install some random package from the kernel.org site and it couldn't make the driver (it threw some error). Then I updated the kernel to 3.10-rc7 and downloaded the latest release of compat drivers, and again the same problem occurred. I reinstalled Ubuntu 12.04.2 and now I am using the 3.5 kernel, because I don't know if rc7 is a stable version. So my question is: which compat-wireless or compat-drivers release should I download for this kernel, and what is the installation process? I tried a command from the repository and it returned that it's not found. PS: I am new to Ubuntu and Linux in general, so explaining the install process at length, and which driver I should install, would be appreciated.
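
    For orientation, the usual build sequence for a compat-drivers tarball looks roughly like this (the tarball name and the driver-select step are assumptions that depend on the release you pick; it needs to be one built for, or newer than, the running 3.5 kernel, with the matching linux-headers package installed):

      tar xvf compat-drivers-<version>.tar.bz2
      cd compat-drivers-<version>
      ./scripts/driver-select ath9k    # restrict the build to the Atheros ath9k driver
      make
      sudo make install
      sudo modprobe ath9k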

    Read the article

  • Apache 2 Symbolic link not allowed or link target not accessible

    - by djechelon
    While the title of this question matches an already asked question, in my case I have already set Options +FollowSymLinks. The setup is the following: my hosting setup includes an htdocs/ directory that is the default document root for HTTP websites, and htdocs-secure for HTTPS. They are meant for sites that need a different HTTPS version. When both share the same files, I create a link from htdocs-secure to htdocs with ln -s htdocs htdocs-secure, but here comes the problem! The log still says: Symbolic link not allowed or link target not accessible: /path/to/htdocs-secure
    Vhost fragment:
      Header always set Strict-Transport-Security "max-age=500"
      DocumentRoot /path/to/htdocs-secure
      <Directory "/path/to/htdocs-secure">
          allow from all
          Options +FollowSymLinks
      </Directory>
    I think it's a correct setup. The HTTP version of the site is accessible, so it doesn't look like a permission problem. How do I fix this? [Added] Other info: I use MPM-ITK and I set AssignUserId to the owner/group of both directories.
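
    A quick sanity check worth running, as a sketch (the vhost user name is a placeholder): this error also fires when the process serving the request cannot traverse to the link target, so test what the AssignUserId user actually sees, and confirm the link and its target are owned by that same user if SymLinksIfOwnerMatch is in effect anywhere in the configuration.

      ls -ld /path/to/htdocs /path/to/htdocs-secure
      sudo -u vhostuser ls /path/to/htdocs-secure/
      # if the second command fails, the problem is traversal permissions on the
      # link target rather than the FollowSymLinks option itself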

    Read the article

  • XAMPP - Unable to serve files larger than ~30MB [on hold]

    - by Sparx401
    I'm developing a site locally with XAMPP on Windows 7, and as far as media is concerned, I'm unable to play media files that are larger than 30MB or so. Both video and audio files (MP4 and MP3, respectively) generate this error in Chrome (and show similar errors in other browsers such as IE9 and Opera):
      No data received. Unable to load the webpage because the server sent no data.
      Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.
    It seems that the exact number of MB varies somewhat between browsers, though. One video in question is 34MB and actually plays in Opera and IE9, but gives the aforementioned error in Chrome. I've checked that the file paths are typed correctly and ensured that the .htaccess directive to serve MP4s is there:
      AddType video/mp4 mp4
    I also have these directives set in the same .htaccess file:
      php_value upload_max_filesize "80M"
      php_value post_max_size "80M"
      php_value max_input_time 60
      php_value max_execution_time 60
    And memory_limit is set in php.ini to "128M", so I'm left wondering: what is causing my files not to play, and what directives, if any, do I have to change on the server side? Perhaps something to do with limitations of the GET method (the method I'm seeing in Chrome's network tab among other request/response header info)?
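
    Since the cut-off is suspiciously consistent, one low-risk way to narrow it down (URL is a placeholder) is to watch exactly where the transfer dies, then experiment with Apache's sendfile/mmap settings, which are standard directives and a known source of truncated large static files on some Windows setups; whether they are the culprit here is an assumption.

      curl -v -o NUL http://localhost/media/video.mp4
      # if the byte count where it stops is consistent, try adding to httpd.conf or .htaccess:
      #   EnableSendfile Off
      #   EnableMMAP Off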

    Read the article

  • fail2ban with Cloudflare

    - by tatersalad58
    I'm using fail2ban to block web vulnerability scanners. It works correctly when the site is visited with CloudFlare bypassed, but a banned user can still access it when going through CloudFlare. I have mod_cloudflare installed. Is it possible to block users with iptables when using CloudFlare? Ubuntu Server 12.04 32-bit.
    access.log:
      112.64.89.231 - - [29/Aug/2012:19:16:01 -0500] "GET /muieblackcat HTTP/1.1" 404 469 "-" "-"
    jail.conf:
      [apache-probe]
      enabled  = true
      port     = http,https
      filter   = apache-probe
      logpath  = /var/log/apache2/access.log
      action   = iptables-multiport[name=apache-probe, port="http,https", protocol=tcp]
      maxretry = 1
      bantime  = 30 # Test
    apache-probe.conf:
      [Definition]
      failregex = ^<HOST>.*"GET \/muieblackcat HTTP\/1\.1".*
      ignoreregex =
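
    The underlying issue is that with CloudFlare in front, the source addresses reaching iptables are CloudFlare's, while mod_cloudflare only rewrites the address in the logs, so the banned IP never appears on the wire. A common workaround, sketched here on the assumption that your fail2ban version ships the CloudFlare action (newer releases do, as action.d/cloudflare.conf), is to ban at CloudFlare's edge in addition to iptables; credentials are placeholders:

      action = cloudflare[cfuser="you@example.com", cftoken="your-cloudflare-api-key"]
               iptables-multiport[name=apache-probe, port="http,https", protocol=tcp]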

    Read the article

  • CentOS, YUM Errors?

    - by mike
    Hi, I am using a Media Temple DV server with CentOS. Upon trying to install ImageMagick via yum, I get the following error:
      There was a problem importing one of the Python modules required to run yum. The error leading to this problem was:
      /usr/lib/python2.4/site-packages/rpm/_rpmmodule.so: undefined symbol: rpmdbCheckTerminate
      Please install a package which provides this module, or verify that the module is installed correctly. It's possible that the above module doesn't match the current version of Python, which is:
      2.4.3 (#1, May 24 2008, 13:47:28) [GCC 4.1.2 20070626 (Red Hat 4.1.2-14)]
    Can anyone shed some light on what I might be able to do to fix this? Thanks!
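
    The undefined-symbol error usually means the rpm Python bindings and the rpm library are at mismatched versions (on Plesk-based DV servers, a Plesk or vendor update replacing one of them is a frequent cause). A couple of non-destructive checks, offered as a sketch rather than a guaranteed fix:

      rpm -q rpm rpm-libs rpm-python python
      # if rpm-python does not match the rpm/rpm-libs version, reinstalling the
      # matching rpm-python package is the usual remedy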

    Read the article

  • IIS 7.x Application Pool Best Practices

    - by Eric
    We are about to deploy a bunch of sites to some new servers. I have the following questions about application pools:
    1) It seems advisable to have an application pool per website. Are there any caveats to this approach? Can one application pool, for example, hog all the CPU, memory, etc.?
    2) When should you allow multiple worker processes in an application pool, and when should you not?
    3) Can the private memory limit be used to prevent one application pool from interfering with another? Will setting it too low cause valid requests to recycle the application pool without getting a valid response?
    4) What is the difference between the private and virtual memory limits? (The settings in question are sketched below.)
    5) Are there compelling reasons NOT to run one application pool per site?
    Thanks!
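
    For reference, the limits asked about in 3) and 4) are per-pool settings; a sketch of setting them from the command line (the pool name and values are placeholders; both memory values are in KB):

      %windir%\system32\inetsrv\appcmd set apppool "MySitePool" /recycling.periodicRestart.privateMemory:1048576
      %windir%\system32\inetsrv\appcmd set apppool "MySitePool" /recycling.periodicRestart.memory:2097152
      %windir%\system32\inetsrv\appcmd set apppool "MySitePool" /processModel.maxProcesses:1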

    Read the article

  • Sync iPhone Mail with Webmail

    - by João Paulin
    I had an email account [email protected] hosted on Host A. The mailbox had 100 messages. I wanted to migrate to Host B, so I downloaded all 100 messages from Host A on my iPhone. Now that my site has been successfully migrated to Host B and the email account [email protected] has been created again (the mailbox is empty), how can I send the messages I have downloaded on my iPhone to the mailbox on Host B? Note that the migration from Host A to Host B did not change the IMAP and SMTP addresses and parameters. I'm still using the same addresses, parameters and ports as before. The email accounts just switched hosting.
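
    If doing this from a computer rather than from the phone turns out to be acceptable, imapsync is the usual tool for copying a mailbox between two IMAP servers; a sketch with placeholder hosts and credentials:

      imapsync --host1 imap.hosta.example --user1 you@example.com --password1 'oldpass' \
               --host2 imap.hostb.example --user2 you@example.com --password2 'newpass'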

    Read the article

  • Is anyone else using OpenBSD as a router in the enterprise? What hardware are you running it on?

    - by Kamil Kisiel
    We have an OpenBSD router at each of our locations, currently running on generic "homebrew" PC hardware in a 4U server case. Due to reliability concerns and space considerations, we're looking at upgrading them to some proper server-grade hardware with support, etc. These boxes serve as the routers, gateways, and firewalls at each site. At this point we're quite familiar with OpenBSD and PF, so we're hesitant to move away from the system to something else such as dedicated Cisco hardware. I'm currently thinking of moving the systems to some HP DL-series 1U machines (model yet to be determined). I'm curious to hear if other people use a setup like this in their business, or have migrated to or away from one.

    Read the article

  • Does Google submit HTML forms?

    - by Saeed Neamati
    I have a web page, say http://domain/purchase, and on this page I have a web form. A user, on submitting this form (which has both client-side and server-side validation and won't pass until the fields are filled in appropriately), is redirected to another page, where they can choose other things, specify other settings and then purchase our product. Say the second page is http://domain/options. So, a user comes to our site, visits http://domain/purchase, fills in the form, submits it, and is then redirected to the second page, http://domain/options?parameter1=value1&parameter2=value2, which contains parameters from the first page. This is very common for passing parameters between web pages (or technically, between URLs). Now, reviewing my website, I saw that Google had indexed some of my redirected web pages and URLs, like:
      http://domain/options?parameter1=value1&parameter2=value2
      http://domain/options?parameter1=value3&parameter2=value4
      http://domain/options?parameter1=value5&parameter2=value6
      http://domain/options?parameter1=value7&parameter2=value8
      http://domain/options?parameter1=value9&parameter2=value10
    This means that Googlebot has visited our http://domain/purchase page, filled in our form, submitted it, and been redirected to the other URL with the corresponding parameters. This is the only way that makes sense to me. Does Google really fill in forms? PS: All parameters are meaningful, meaning that they are not filled in arbitrarily. For example, the phone parameter in the indexed pages has correct phone numbers. How is that possible?
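
    One detail worth keeping in mind (a sketch; the actual form markup isn't shown in the question): if the form submits with method="GET", the "submission" is just a URL of the shape /options?parameter1=...&parameter2=..., so such URLs can also reach Google from shared links, referrers or browser toolbars without any bot filling in the form. A minimal form of that shape would look like:

      <form action="/options" method="GET">
        <input name="parameter1">
        <input name="parameter2">
        <button type="submit">Continue</button>
      </form>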

    Read the article

  • BlueCoat reverse proxy NTLM authentication

    - by mathieu
    Currently, when we want to access an internal site from the Internet (IIS with NTLM auth), two login screens appear:
    step 1: LDAPAuth, from the BlueCoat, which checks login/password validity against Active Directory
    step 2: NTLM auth, from our application.
    Is it possible to configure the reverse proxy to take the LDAP credentials provided at step 1 and pass them to whatever application requests them? Of course, if those credentials aren't valid, nothing happens. We're using a BlueCoat SG400. Update: we're not looking for SSO where the user doesn't have to enter a password. We want the user to enter his domain credentials in the LDAPAuth dialog box, and the proxy to reuse them to authenticate against our application, or any application that uses NTLM. We've only got one AD domain behind the reverse proxy.

    Read the article

  • Video acceleration problem with Windows 7 games and PPTX files

    - by Jordan 1GT
    I have a Dell XPS M1330 which originally ran Vista, but I upgraded it to Windows 7. When I try to run a Windows 7 game like Spider Solitaire, I receive the following message: The game is running in software rendering mode. Hardware acceleration is either disabled or not supported by your video card driver, which could slow down game performance. Make sure you have the latest video card driver installed and that hardware acceleration is turned on. I confirmed that hardware acceleration is turned on. When I go to Dell's site, I'm told there is no later video driver. When I run the game it runs very choppily. I also have a .pptx file which is doing strange things in Normal view, and I suspect it may be related to the same video acceleration problem.

    Read the article

  • Does /NOCANDY avoid any adware-related activities with OpenCandy?

    - by Andrew Grimm
    OpenCandy claims that using the /NOCANDY switch with an OpenCandy-affiliated installer allows you to avoid OpenCandy. Should I take their word for it? If not, can anyone independent of OpenCandy and their affiliates verify that /NOCANDY works? Background: I am about to install WinSCP onto a fresh Windows installation, and found out that new versions have OpenCandy associated with their installer. For the sake of balance, here's a link to WinSCP's FAQ on OpenCandy. The claim about /NOCANDY working appears on WinSCP's web site, but the same boilerplate appears on other OpenCandy web sites. If the OpenCandy people are offended by the tag "spyware": sorry, but it's the main tag here, rather than "adware".
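
    For reference, the switch in question is simply passed on the installer's command line; a sketch (the installer file name is a placeholder for whatever version you downloaded):

      winscp-setup.exe /NOCANDY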

    Read the article

  • How to Search for (and Find) Solaris Docs

    - by rickramsey
    Just the other day, I went to the recently-released Oracle Solaris 11 library to search for information about the print service changes. I knew there had been changes in Oracle Solaris 11, but could not remember the new approach to printing. So, being the optimist that I (never) am, I went to the Oracle Solaris 11 Information Library on docs.oracle.com and typed "print service" into the search box. Imagine my surprise when the response back was: We did not find any search results for: print service site:download.oracle.com url:/docs/cd/E23824_01. OMG! WTF? Are you kidding me? After throwing a few stuffed animals at my computer screen, I tried again. Is search broken? Well, sort of (and I'm trying to get it fixed). In the meantime, however, there is a reasonably simple user workaround. Possibly unnoticed by most people, there is a Within drop-down menu on the Oracle search results page. If you simply open the Within menu, select Documentation, and click the little magnifying glass again, you (should) get the expected results. Is it perfect? No, but at least it's an improvement over being completely broken. - Janice Critchlow, Information Architect, Systems

    Read the article

  • Increase text size in Ubuntu 10.04 due to having large resolution/monitors

    - by Sridhar Ratnakumar
    I have dual 24" monitors, both at 1920x1080 resolution. Consequently the text appears very small. I frequently use the following text-intensive applications: web browser (Google Chrome), IDE (Komodo), terminal (GNOME Terminal) and email (Thunderbird). I can configure the text size in the IDE, terminal and email client. But for Chrome, it is not a good idea to set only the proportional font size, because often one wants the entire site (not just proportional fonts) to be zoomed. So I am asking: Is it possible to increase the DPI in Ubuntu (much like on Windows) so as to increase the text size across all apps? OR Is it possible to set a permanent 'zoom' in Google Chrome, using a third-party extension maybe? I am using Ubuntu 10.04 (Lucid Lynx).
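
    On GNOME 2 (which 10.04 ships) the font DPI can be raised system-wide from the command line as well as from the Appearance dialog; a sketch, with 120 as an example value:

      gconftool-2 --type float --set /desktop/gnome/font_rendering/dpi 120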

    Read the article

  • How to use instances with s3 load balancing?

    - by Slay
    I have some questions about instances and load balancing in Amazon S3. I can configure an instance, but I do not understand how to deal with many instances. Currently, my instance is loaded with MySQL, PHP, etc. (all in one). How do I ensure my instances are scaling? E.g. if I have a site that is supposed to be handled by 3 instances plus Amazon RDS, do I need to host my code base on all 3 instances? How do people normally do this? Facebook, for example, has 1000+ servers. Do they host their code base on all 1000+ servers? Thanks

    Read the article

  • How to use Nintex Reusable Workflow Template

    - by ybbest
    If you would like to reuse your workflow logic across more than one list or library, you can create a reusable workflow template. Here are the steps:
    1. Go to Site Settings and create a reusable workflow template.
    2. Select the content type you would like the template to be bound to and give the workflow a title.
    3. Create your workflow the same way as you would a list workflow, and publish it.
    4. Finally, you need to add the workflow to the list on which you want it to run.
    5. Go to the workflow settings and add a workflow.
    6. Select the content type and configure the workflow as below.
    7. After you have done this, your workflow will run as usual.
    Note:
    1. You cannot conditionally start your workflow.
    2. Your workflow is not automatically bound to the list when you add the content type to the list; you need to configure it manually, as shown in steps 4-6.

    Read the article

  • Varnish / Apache redirecting to backend port 8080

    - by deko
    I'm running Varnish 2 with an Apache backend on port 8080 on the same machine. Everything is working fine except one problem: sometimes Apache(?) redirects to the backend port :8080, especially when I'm using .htaccess. Users see the 8080 port in the URL, and Google is crawling my site on the backend port as well, which is not desirable. I want Apache on 8080 to be accessible only to Varnish on localhost, and never to redirect to or display the backend port. What would be a quick way to prevent users from being directed to 8080 and to keep search engines from crawling the backend? Here is an example .htaccess line:
      redirect /promotion /register.php?promotion=june
    which causes www.domain.com/promotion to redirect to www.domain.com:8080/register.php?promotion=june
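
    Two small changes usually cover this, shown as a sketch (the host name is taken from the example above, everything else is an assumption): bind Apache to localhost only so the backend port cannot be reached from outside, and make the redirect target absolute so Apache never builds a self-referential URL that includes :8080.

      # in the Apache configuration: listen on localhost only
      Listen 127.0.0.1:8080
      # in .htaccess: redirect to the public URL instead of a relative path
      Redirect /promotion http://www.domain.com/register.php?promotion=june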

    Read the article

  • Oracle celebrates a successful Oracle CloudWorld in Bogotá

    - by yaldahhakim
    written by: Diana Tamayo Tovar
    Oracle CloudWorld Bogotá began with scattered showers, rain and strong winds, inviting Colombians to spend a whole day in the social, mobile and complete world of Oracle Cloud. The event took place on November 6th with 807 attendees, 15 media representatives and 65 partners, who gathered to share the business value of Cloud along with Oracle executives and Colombian market leaders. Line-of-business leaders in sales and marketing, customer service and support, HR and talent management, and finance and operations shared their ideas with Colombian customers, giving them a chance to learn, discover and engage with the tools, trends and concepts of Cloud. The highlights of the event included the presence of keynote speakers such as Bob Evans, Chief Communications Officer, and a customer testimonial session with top business leaders from the Colombian insurance, finance, retail, communications and health industries, who shared their innovation experiences and success stories on workforce empowerment, talent management, cloud security, social engagement and productivity, providing best-case scenarios of how Oracle has helped them transform their business with technologies like cloud, social collaboration and mobile applications. The keynote session was preceded by a customer success story from one of the largest virtual network operators in the country, providing an interesting case study of mobile banking innovation and a great customer testimonial of the importance of cross-industry strategies and cloud technology. The event provided five different tracks on the main trends in how companies communicate and engage with different audiences, providing a different perspective on the importance of empowering brands through their customers and trusting and investing in technology for growth, while Oracle University shared their knowledge with "Oracle Cloud Fundamentals", a training lesson covering Java Cloud, Database Cloud and other Oracle Cloud product technologies and solutions. The rainy-day scenario included side shows of aerial acrobatics and speed-painting performances to recreate the environment of modern and flexible Cloud solutions in a colorful and creative way. Oracle CloudWorld Bogotá was a great opportunity to expose invalid cloud myths and the main concerns of Colombian customers towards cloud, considering IDC Latin America studies stating that 93% of Colombian business leaders are interested in cloud but only 47% understand its business value. Spending a day in the cloud with 6 demoground stations, conference sessions, interesting case studies and customer testimonials will surely widen the endless market opportunities for Colombian customers, leaving them amazed with how Oracle Cloud works towards integration with other environments, non-Oracle applications, social media and mobile devices with bulletproof security infrastructure.

    Read the article

  • Sending SPAM free mail through my website

    - by Sara
    Hi, I've been battling with this issue for a couple of months. I need to send bulk mail (not spam) through my social network to users, in situations like newsletters and site invitations (when a user imports their address book contacts). I'm using shared hosting and it limits me to 500 mails per hour. Even though I manage to send the mail, most of it ends up in users' spam boxes. After researching, these are the solutions I finally came up with:
    1) Use Google Apps SMTP (http://www.google.com/apps/intl/en/business/features.html)
    2) Move to a VPS
    3) Use shared hosting with throttling enabled
    Please advise me on what to choose. Will using Google Apps prevent my mail from being flagged as spam? I can't use a third-party SMTP service like iContact or AWeber, as the invitation-sending script will send emails to thousands of contacts, depending on the user's address book. Thanks in advance.
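
    Whichever option is chosen, deliverability depends heavily on the sending domain publishing SPF (and ideally DKIM) records that cover the server actually sending the mail. As a sketch for the Google Apps route (the domain is a placeholder; the include is Google's published SPF record):

      example.com.   IN TXT   "v=spf1 include:_spf.google.com ~all"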

    Read the article
