Search Results

Search found 25493 results on 1020 pages for 'custom wordpress pages'.

  • MySQL: Best of Breed Database

    - by Bertrand Matthelié
    Oracle offers best-of-breed technology at every layer of the stack, from servers and storage to applications. Discover why MySQL is a best-of-breed database solution for: web-based applications, including the next generation of highly demanding web, cloud, mobile and social applications; distributed applications requiring a powerful and reliable embedded database; and custom and departmental enterprise applications on Windows and other platforms. Check out our Resource Center to get access to white papers and other resources. And remember to register for MySQL Connect if you haven't done so yet. You can still save US$300 over the on-site fee – Register Now!

    Read the article

  • Tricky mod_rewrite challenge

    - by And Finally
    I list about 9,000 records on my little site. At the moment I'm showing them with a dynamic page, like http://domain.com/records.php?id=019031 But I'd like to start using meaningful URLs like this one on Amazon http://www.amazon.co.uk/Library-Mythology-Oxford-Worlds-Classics/dp/0199536325 where the title string on the root level gets ignored and requests are redirected to the records.php page, which accepts the ID as usual. Does anybody know how I could achieve that with mod_rewrite? I'm wondering how I'd deal with requests to my other root-level pages, like http://domain.com/contact.php, that I don't want to redirect to the records page.
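
    One way this is often handled, sketched as a minimal .htaccess fragment (assuming Apache with mod_rewrite enabled and that the IDs are purely numeric; the /dp/ segment just mirrors the Amazon example):

    ```apache
    RewriteEngine On

    # Leave real files (contact.php and friends) and real directories untouched
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d

    # /Any-Title-String/dp/019031  ->  /records.php?id=019031
    RewriteRule ^[^/]+/dp/([0-9]+)/?$ records.php?id=$1 [L,QSA]
    ```

    Because the title segment is never captured, any string works there, which is exactly how the Amazon-style URLs behave.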

    Read the article

  • Safest way (e.g. HTTPS, POST, PGP) to send decryption keys through the web?

    - by theGreenCabbage
    I am in the final stages of development for my Revit plugin. The plugin is programmed in C# and distributed via DLLs. One of the DLLs is an encrypted SQLite database (with proprietary data) packaged as a DLL. Currently, during development, the decryption key for the SQLite database is hardcoded in my main DLL (the program's DLL). For distribution, since DLLs are easily decompiled, I need a new way to handle decrypting the database. My solution is to send our decryption keys from our servers securely to the host computer. I was looking into POST, thinking it was more secure than GET, but upon research it appears similarly insecure, only more "obscure" than GET. I also looked into HTTPS, but Hostgator requires extra money for HTTPS use. I am in need of some advice: are there any custom solutions I can implement for this?
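
    For what it's worth, the usual shape of "fetch the key from our server at runtime" looks something like the sketch below. It is Python rather than C#, purely to show the flow, and the endpoint URL, token scheme, and JSON field name are invented placeholders, not part of any existing API. Note that the key still ends up in the client's memory, so this raises the bar rather than making the data unrecoverable:

    ```python
    # Hypothetical sketch: retrieve the database decryption key over TLS at
    # runtime instead of hard-coding it in the DLL. All names are placeholders.
    import requests

    API_URL = "https://licensing.example.com/api/get-key"     # placeholder endpoint
    LICENSE_TOKEN = "per-customer-token-issued-at-purchase"   # placeholder credential

    def fetch_decryption_key() -> str:
        # HTTPS protects the key in transit; the token ties the request to a
        # licensed customer so the endpoint is not an open key dispenser.
        resp = requests.post(API_URL, json={"token": LICENSE_TOKEN}, timeout=10)
        resp.raise_for_status()
        return resp.json()["key"]
    ```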

    Read the article

  • Where would you start if you were trying to solve this PDF classification problem?

    - by burtonic
    We are crawling and downloading lots of companies' PDFs and trying to pick out the ones that are annual reports. Such reports can be downloaded from most companies' investor-relations pages. The PDFs are scanned and the database is populated with, among other things, the title, contents (full text), page count, word count, orientation, and first line. Using this data we are checking for the obvious phrases such as "annual report", "financial statement", "quarterly report", and "interim report", then recording the frequency of these phrases and others. So far we have around 350,000 PDFs to scan and a training set of 4,000 documents that have been manually classified as either a report or not. We are experimenting with a number of different approaches, including Bayesian classifiers and weighting the different factors available. We are building the classifier in Ruby. My question is: if you were thinking about this problem, where would you start?
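
    As a concrete starting point, the phrase counts and document metadata can be turned into feature dictionaries before anything fancier is attempted. The project's classifier is being written in Ruby; the sketch below is Python with invented field names, purely to show the shape of a first pass:

    ```python
    # Sketch of a first-pass feature extractor. "doc" is assumed to be a dict
    # holding the scanned fields (title, contents, page_count, word_count, ...).
    PHRASES = ["annual report", "financial statement",
               "quarterly report", "interim report"]

    def features(doc):
        text = doc["contents"].lower()
        feats = {phrase: text.count(phrase) for phrase in PHRASES}
        feats["title_hit"] = int("annual report" in doc["title"].lower())
        feats["page_count"] = doc["page_count"]
        feats["word_count"] = doc["word_count"]
        return feats

    # These dictionaries feed a naive Bayes (or any other) classifier; the
    # 4,000 hand-labelled documents become the training set, with a held-out
    # slice kept aside to measure accuracy.
    ```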

    Read the article

  • Make the Web Fast: Automagic site optimization with mod_pagespeed 1.0!

    mod_pagespeed is an open-source Apache module that automatically optimizes web pages and the resources on them: images, CSS, JavaScript, and much more. In this episode, we'll catch up with Joshua Marantz, the tech lead of the project at Google, and talk about the history of mod_pagespeed, its fast-growing adoption (130K+ sites!), its technical architecture, and how it works under the hood. Finally, we'll talk about the upcoming 1.0 release milestone for the project. If you're curious about mod_pagespeed, this is definitely a show you won't want to miss! Ask and vote for questions at: bit.ly. From: GoogleDevelopers

    Read the article

  • Flowmotion Running on Ubuntu Server [migrated]

    - by Thomas Egan
    I am trying to configure Flowmotion to work on my Ubuntu Server. At present I use LAMP to serve pages from a VirtualBox installation. I will be moving this to a dedicated server, but would like to enable true streaming of videos using this installation. I am only interested in open-source streaming for a research project, and although I have installed Flowmotion via apt-get I don't know how to start the service so that embedded videos located on the server will stream. Can anybody provide any information regarding this, or online resources I may have missed? I have checked the documentation, however it appears far too complex. Just to clarify, I'm running VirtualBox 4.2.1 on Mac OS X 10.6.8 and Ubuntu Server 12.06 64-bit.

    Read the article

  • How to create website shortcuts on the desktop / in a folder using Chrome?

    - by it's me
    Help with something really basic, which I am unable to figure out. In Windows, creating a shortcut (link) to a website is as easy as dragging and dropping the favicon/address bar onto the desktop or into a folder. I tried the same in Ubuntu (Chrome browser), but it's not working: the web page gets saved as a file, not as a link/shortcut. Am I missing something, or is there no way to quickly create shortcuts to web pages/websites without installing some app for that? If the latter is true, is there an app that does what I need? I hope I am clear enough.

    Read the article

  • Website with over 1 million posts with not much textual content

    - by Far Se
    I've made a website which crawls files from all over the Internet, and I feel like Google will ban me if I send it sitemaps containing all of these pages (1M+), because the pages contain only the file name, size, number of downloads, and the download link(s). I'm considering this because I've made another website like this in the past and Google banned me after one week with the reason "spam", even though it was not (maybe somebody falsely reported me?!). Does someone have an idea about how to keep Google from banning my website? I've seen several other sites like mine and they don't get banned or... anything. Also, should I send the sitemap, or wait until Google indexes every page as it finds them? Thanks in advance :)
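
    Whatever the answer on the spam side, the sitemap side is mechanical: the sitemap protocol caps each file at 50,000 URLs, so a million-plus pages means several sitemap files plus a sitemap index. A rough Python sketch (file names and domain are placeholders):

    ```python
    # Sketch: split a large URL list into sitemap files of at most 50,000 URLs
    # each, plus a sitemap index that points at them.
    from xml.sax.saxutils import escape

    def write_sitemaps(urls, base="https://example.com/sitemaps"):
        chunks = [urls[i:i + 50000] for i in range(0, len(urls), 50000)]
        for n, chunk in enumerate(chunks, 1):
            with open(f"sitemap-{n}.xml", "w", encoding="utf-8") as f:
                f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
                f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
                for url in chunk:
                    f.write(f"  <url><loc>{escape(url)}</loc></url>\n")
                f.write("</urlset>\n")
        with open("sitemap-index.xml", "w", encoding="utf-8") as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
            f.write('<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
            for n in range(1, len(chunks) + 1):
                f.write(f"  <sitemap><loc>{base}/sitemap-{n}.xml</loc></sitemap>\n")
            f.write("</sitemapindex>\n")
    ```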

    Read the article

  • Game Server on Windows Azure

    - by MrWiggels
    What do you guys think of using Windows Azure for deploying a custom-built game server? It's being built in C#, and I want to get a few things down before stretching too far into the project. I like the idea of being scalable, but I also know that I will never get anywhere near the scale of WoW, or anything quite as big. It will just be an interesting journey to test something like this. So, will Windows Azure work for that, or are there any other services that can provide something like that?

    Read the article

  • Official release of WebMatrix, Microsoft's new web development tool for beginners and small businesses

    Official release of WebMatrix, Microsoft's new web development tool for beginners and small businesses. Update of 14/01/11: As we anticipated yesterday (see below), WebMatrix, Microsoft's new IDE, has been released. WebMatrix is an all-in-one tool aimed at all developers, but particularly at students and at anyone looking for a quick and simple way to build a website. It includes IIS Express (a development web server), ASP.NET Web Pages (a web development technology), and SQL Server Compact (an embedded database). "WebMatrix democratizes the web platform by...

    Read the article

  • Are these interview questions too difficult for entry-level C++ positions?

    - by Banana
    I recently had a few interviews for programming jobs within the financial industry. I am looking for entry-level positions, as I specify in the cover letter. However, I am usually asked questions such as: all the two-letter Unix commands you know; the representation of float/double numbers (the IEEE standard); segmentation faults, memory dumps, and related issues; all the functions you know for converting a string to an integer (not just atoi); how to avoid virtual tables; etc. Is that the custom? Because I don't think this kind of question makes sense for someone looking for an entry-level job. Is it totally crazy to think that they should ask more conceptual questions? This is beginning to drive me nuts, honestly. Thanks

    Read the article

  • Will I keep Google traffic on the new site when moving content from the old site? [closed]

    - by user1324762
    Possible Duplicate: new domain, old links are 301'd from old domain to new, how will this affect my rankings? I have a site about bikers. Now I have created a dating site for bikers. I don't need the old site any more; I want to move all the articles to this new dating site. So basically, this is not only moving content to a new domain, but to an entirely new site. What I am planning to do is set up 301 redirects for all 200 articles. For pages that are not articles, I will just put up a message that the site will be down soon. Do you think I will keep all the Google traffic the old site gets from those articles? Is there anything I should be aware of or careful about?

    Read the article

  • Does Google see the output of document.write?

    - by merk
    I've got a site where people can list machinery for sale. Each item for sale has its own dynamic page. On each of these pages we allow the person selling the item to have a link back to their own website. Some people only sell a handful of items and some people are selling dozens or hundreds of items, so in some cases we can have 100 links back to their external site. Our SEO guy is saying this is bad (I'll open another question on that). So I was wondering: if I take the links and spit them out using document.write, will that hide them from Google and the other search engines?

    Read the article

  • Convert Microsoft Word documents (.doc/x) into HTML files

    - by danie7L T
    Does anybody know of a good application to get this done quickly and efficiently? I bought Word Cleaner, but the results are merely sufficient and I need to go over all the generated HTML files to clean up tons of useless injected tags like <strong>H</strong><strong>ell</strong><strong>o </strong><em>Wor</em><em>ld</em>. Most of the articles displayed on a website I manage are based on documents written in MS Word by people who have little idea of what margins, ordered/unordered lists, or foot/endnotes are for, and I cannot make them use something else. Does anyone have a tip to help me handle those pages more efficiently than going over them by hand to correct them and apply my CSS style? NB: Just for the record, using "Save as HTML DOC" in Word is far worse than Word Cleaner.
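
    One way to make the cleanup less manual is to strip the redundant inline tags in a batch step before applying the site's CSS. A rough Python sketch using BeautifulSoup (my assumption; any HTML parser would do, and the tag list is illustrative):

    ```python
    # Sketch: unwrap the junk inline tags that Word-derived HTML is full of,
    # keeping their text content. Note this also drops legitimate emphasis,
    # so filter more carefully if that matters.
    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    JUNK_INLINE = ["strong", "em", "span", "font"]

    def clean_word_html(html: str) -> str:
        soup = BeautifulSoup(html, "html.parser")
        for tag in soup.find_all(JUNK_INLINE):
            tag.unwrap()  # drop the wrapper, keep the text
        return str(soup)

    print(clean_word_html(
        "<p><strong>H</strong><strong>ell</strong><strong>o </strong>"
        "<em>Wor</em><em>ld</em></p>"))
    # -> <p>Hello World</p>
    ```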

    Read the article

  • Mouse cursor is MASSIVE inside of firefox and chromium

    - by user171396
    While installing Ubuntu I accidentally hit the high-contrast option. I could not figure out how to disable it within the installer, so I let it complete. I booted into Ubuntu 13.04 and high contrast was still on. I disabled it in Universal Access, and now I'm noticing my mouse cursor is huge in web browsers. This is very much a stock install. Is there a setting to disable the huge mouse cursor? I mean, the thing is 4 times the size of the text on normal pages, and it's only in browsers from what I've seen so far. EDIT: Looks like it's in everything with text: terminal, app store, folders and files... /sigh.

    Read the article

  • I cannot log in after theme change

    - by sssuizaaa
    After changing the GTK theme in the GNOME Tweak Tool I was kicked out of the session to the login screen and now I cannot log in. I can only log in using the guest account. So in the GRUB menu I selected recovery mode, and in the resulting menu I selected the root option to drop to a root shell prompt. Once there I tried a couple of things I had found on several pages and in the forums. 1. gsettings reset org.gnome.desktop.interface gtk-theme. This is what I got: (process:642): WARNING: Command line 'dbus-launch --autolaunch=4438d024dd45ef7fb2d3f4ab0000000f --binary-syntax --close-stderr' exited with non-zero exit status 1: Autolaunch error: X11 initialization failed.\n and nothing changed. 2. gconftool-2 --type=string -s /desktop/gnome/interface/gtk_theme Radiance. With this I was trying to change the GTK theme to the Radiance one. No strange message this time, but it did not work either. I still cannot log in. Any ideas, please? sssuizaaa

    Read the article

  • How to fix “Error: This solution contains no resources scoped for a Web application and cannot be deployed to a particular Web application.”

    - by ybbest
    Problem: When I try to deploy my custom wsp solution to a specific web application, I get the error below: "This solution contains no resources scoped for a Web application and cannot be deployed to a particular Web application." Analysis: The error message itself explains why you cannot deploy the solution to a single web application. However, if you do not want to deploy the solution to all the web applications but only to a specific one, you need to change the solution's Assembly Deployment Target setting from GlobalAssemblyCache to WebApplication (the original post shows before/after screenshots of this setting). Solution: After you change the Assembly Deployment Target and run the deployment script again, the solution deploys successfully. References: http://blogs.msdn.com/b/jjameson/archive/2007/06/17/issues-deploying-sharepoint-solution-packages.aspx

    Read the article

  • Joomla URL issue with sh404SEF

    - by user5858
    I've been using sh404SEF with my site for a couple of months. But on my site I'm getting URLs of the form: http://www.downloadformsindia.com/Income-Tax-Forms/all-income-tax-return-itr-forms-2010-2011.html?task=view If I remove this suffix (?task=view), it takes us to the same page. I had raised this issue in the sh404SEF forum, and I was told that this data is treated as a parameter by search engines and hence ignored. I want to redirect, using a rewrite rule in .htaccess, all such URLs to the versions without ?task=view: ....downloadformsindia.com/Income-Tax-Forms/all-income-tax-return-itr-forms-2010-2011.html?task=view should be redirected to http://www.downloadformsindia.com/Income-Tax-Forms/all-income-tax-return-itr-forms-2010-2011.html So my question is: will this redirection create 404s in Google Webmaster Tools? I have thousands of pages on the site.
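
    For reference, a minimal .htaccess sketch of that redirect (assuming Apache with mod_rewrite, placed before Joomla's own rules; the trailing ? drops the query string from the target):

    ```apache
    RewriteEngine On

    # .../some-page.html?task=view  ->  .../some-page.html  (permanent redirect)
    RewriteCond %{QUERY_STRING} (^|&)task=view(&|$)
    RewriteRule ^(.*)$ /$1? [R=301,L]
    ```

    Because the old URLs answer with a 301 rather than a 404, crawlers should simply follow them to the clean versions instead of reporting them as missing.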

    Read the article

  • The Free Software Foundation presents its list of high-priority open source projects and invites users to contribute to the growth of free software

    Free software needs you! The Free Software Foundation invites users to contribute to free software projects. Free software: we love it, we use it, we share it, and sometimes we even sell it. Yet how many people give even a moment's thought to supporting it? How many routinely skip the "Donate" button on the official pages of projects like The GIMP? How many devote a little of their time to promoting free software? In the mindset of...

    Read the article

  • Chrome window freezes in Ubuntu

    - by Dragon5689
    Sometimes, especially when I open pages that have some kind of multimedia content, Chrome freezes. It always happens directly after opening a new tab. In contrast to the way Chrome usually has only individual tabs crash, the entire window freezes. If I have multiple separate Chrome windows open, the others keep working. I run Ubuntu 12.04 and Chrome version 20.0.1132.47, but this has been going on since I set up my machine around half a year ago. Is anyone having the same problem, or does anyone have an idea what could be wrong here?

    Read the article

  • How can robots beat CAPTCHAs?

    - by totymedli
    I have a website e-mail form. I use a custom CAPTCHA to prevent spam from robots. Despite this, I still get spam. Why? How do robots beat the CAPTCHA? Do they use some kind of advanced OCR, or do they just get the solution from wherever it is stored? How can I prevent this? Should I change to another type of CAPTCHA? I am sure the e-mails are coming from the form, because they are sent from the mailer that serves the form messages. The letter style is also the same. For the record, I am using PHP + MySQL, but I'm not searching for a solution to this particular problem; I'm interested in how robots beat these technologies in general. I only described my situation as an example, so you can better understand what I'm asking about.

    Read the article

  • Finding .desktop files based on their titles?

    - by stwissel
    That's part 2 of a question asked earlier (split out to be able to give credit to the answers individually). When I type into the Dash, applications show up with their titles (also when hovering over the launcher); how can I find the associated .desktop file? When I look in the usual suspect locations (/usr/share/applications and ~/.local/share/applications) with Nautilus I see the titles, but not the file names (not even in Properties, which sucks). When I look from the command line I see the file names but not the titles (a switch would be nice). How can I get a listing (a custom column?) that shows them next to each other?
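
    One way to get that side-by-side listing is a small script: .desktop files are INI-style, so Python's configparser can read them. A sketch that scans the two directories mentioned above and prints each title next to its file name:

    ```python
    # Sketch: print each launcher's Name= title next to its .desktop file path.
    import configparser
    import glob
    import os

    DIRS = ["/usr/share/applications",
            os.path.expanduser("~/.local/share/applications")]

    for directory in DIRS:
        for path in sorted(glob.glob(os.path.join(directory, "*.desktop"))):
            parser = configparser.ConfigParser(interpolation=None, strict=False)
            try:
                parser.read(path, encoding="utf-8")
                title = parser.get("Desktop Entry", "Name", fallback="(no Name=)")
            except configparser.Error:
                title = "(unparseable)"
            print(f"{title:40} {path}")
    ```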

    Read the article

  • Continuing Education as a Part of Your Job [closed]

    - by Mike
    I work as a programmer for a mid-sized company (about 500 employees) in the medical industry. Before that I worked at a custom software development/consulting company. At both companies, programmers were never officially given time to continue their education by taking classes, reading books or blogs, or doing research relevant to the job. At the software development company we were offered some money to pay for a class, but not any time off work to take it. I have been wondering: do most employers of programmers give time off work to take a class, read a book, or do job-related research? By time off work I just mean some period of time when you can stop development; it does not have to mean leaving the office. I would be grateful to hear about everyone's experience with this.

    Read the article

  • Is doing A/B tests using site redirection a bad practice?

    - by user40358
    I'm developing hotel websites here in Brazil. When a site is done, we run an A/B test against the old version to measure conversion and show the hotel owner how good our site is. Since I cannot put the old site inside the new one as a sub-resource (newone.com/old), I'm currently doing those A/B tests as follows: 1) I create two Google Analytics accounts, one for each site (old and new); 2) I put the GA tags on the old website's pages (replacing its possibly existing GA ID with the newly created one); 3) I add JavaScript code that redirects the user to the old website (at a different URL and a different domain) with 50% probability. Then I compare all the metrics, events and goals between those two GA accounts. How bad is this? How might Google interpret the fact that visitors are sometimes redirected and sometimes not? The experiment usually runs for two weeks. Is there any alternative way of doing this better?

    Read the article
