Search Results

Search found 59326 results on 2374 pages for 'full text search'.

Page 121 of 2374

  • C#: warn if a text box is empty or contains a non-whole number

    - by Jamaul Smith
    In my specific case, I need the value in propertyPriceTextBox to be numeric only, and a whole number. A value also has to be entered, and I can just MessageBox.Show() a warning and that's all I'd need to do. This is what I have so far:

        private void computeButton_Click(object sender, EventArgs e)
        {
            decimal propertyPrice;
            if (decimal.TryParse(propertyPriceTextBox.Text, out propertyPrice))
            {
                if (residentialRadioButton.Checked == true)
                    commisionLabel.Text = (residentialCom * propertyPrice).ToString("c");
                if (commercialRadioButton.Checked == true)
                    commisionLabel.Text = (commercialCom * propertyPrice).ToString("c");
                if (hillsRadioButton.Checked == true)
                    countySalesTaxTextBox.Text = (hilssTax * propertyPrice).ToString("c");
                if (pascoRadioButton.Checked == true)
                    countySalesTaxTextBox.Text = (pascoTax * propertyPrice).ToString("c");
                if (polkRadioButton.Checked == true)
                    countySalesTaxTextBox.Text = (polkTax * propertyPrice).ToString("c");

                decimal result;
                result = (countySalesTaxTextBox.Text + stateSalesTaxTextBox.Text + propertyPriceTextBox.Text + comissionTextBox.Text).ToString("c");
            }
            else
            {
                MessageBox.Show("Property Price must be a whole number.");
            }
        }
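
    A minimal sketch of one way to enforce this (the control names are taken from the question; the helper name is illustrative): decimal.TryParse already returns false for empty or non-numeric text, and comparing the parsed value against decimal.Truncate rejects anything with a fractional part.

        // Sketch: returns true only when the text box holds a whole number.
        // A failed TryParse covers both "empty" and "not numeric".
        private bool TryGetWholePrice(out decimal propertyPrice)
        {
            if (!decimal.TryParse(propertyPriceTextBox.Text, out propertyPrice)
                || propertyPrice != decimal.Truncate(propertyPrice))
            {
                MessageBox.Show("Property Price must be a whole number.");
                return false;
            }
            return true;
        }

    computeButton_Click could then begin with "decimal propertyPrice; if (!TryGetWholePrice(out propertyPrice)) return;" and drop the outer if/else entirely.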

    Read the article

  • Is C# development effectively inseparable from the IDE you use?

    - by Ghopper21
    I'm a Python programmer learning C# who is trying to stop worrying and just love C# for what it is, rather than constantly comparing it back to Python. I'm really getting caught up on one point: the lack of explicitness about where things are defined, as detailed in this Stack Overflow question. In short: in C#, "using foo" doesn't tell you what names from foo are being made available, which is analogous to "from foo import *" in Python - a form that is discouraged within Python coding culture for being implicit, rather than the more explicit approach of "from foo import bar".

    I was rather struck by the Stack Overflow answers to this point from C# programmers, which were that in practice this lack of explicitness doesn't really matter, because in your IDE (presumably Visual Studio) you can just hover over a name and be told by the system where the name is coming from. E.g.: "Now, in theory I realise this means when you're looking with a text editor, you can't tell where the types come from in C#... but in practice, I don't find that to be a problem. How often are you actually looking at code and can't use Visual Studio?"

    This is revelatory to me. Many Python programmers prefer a text editor approach to coding, using something like Sublime Text 2 or vim, where it's all about the code, plus command-line tools and direct access and manipulation of folders and files. The idea of being dependent on an IDE to understand code at such a basic level seems anathema. It seems C# culture is radically different on this point, and I wonder if I just need to accept and embrace that as part of my learning of C#. Which leads me to my question here: is C# development effectively inseparable from the IDE you use?
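
    For comparison, C# does have a more explicit form that survives the text-editor test: the using alias directive, which names exactly what is imported. A short sketch, with Foo and Bar as hypothetical names:

        // using Foo;  ~  from foo import *  (every public type in Foo becomes visible)
        using Foo;

        // A using alias names exactly what is brought in, one directive per type,
        // much like: from foo import Bar
        using Bar = Foo.Bar;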

    Read the article

  • Canonicalization issue regarding academic URL vs. blog URL

    - by user5395
    I'm sorry if what I am about to write is long-winded. I only wish to be clear. I am an academic in the scientific community. I maintain a web site for my research, teaching, and other professional activities. Until recently, the content for this site was hosted in a directory on my university department's own server. The address is of the typical form (universityname).edu/~(myusername).

    I decided that I wanted to use WordPress in order to host and manage my page. So I set up a WordPress.com blog and then replaced the index.html file in (universityname).edu/~(myusername) with a new one consisting of a single frame containing the WordPress.com blog. Now when a user visits (universityname).edu/~(myusername), he or she sees the blog instead. This has been pretty nice because, even when the user clicks on links between pages or posts in the blog, the only thing showing up in the address bar of the browser is www.(universityname).edu/~(myusername), because the blog is constrained to a frame.

    However, the effect of this change on the search side of things has not been so kind to me. Before, when someone searched for my name in Google, the first result was always (universityname).edu/~(myusername). This is the most desirable outcome, for professional reasons. (Having my academic URL come up first suggests that I am an accredited professional, and not just some crank with a blog!) But now, Google seems to have canonicalized my web presence under the blog's WordPress.com address. It has completely forgotten about my academic URL and considers the WordPress.com address to be the best address representing me on the web. Unfortunately, WordPress.com doesn't support the canonical tag, so I can't tell the blog to advertise itself as my academic URL in the header. (It doesn't seem to help at all that I have used the WordPress.com dashboard to turn on no-indexing of the blog.)

    One obvious solution would be to use the departmental server to host my content again, with a local installation of the WordPress platform. For reasons beyond my control, the platform will not be deployed on the departmental server at this time. Another solution would be to use shared hosting with WordPress.org support, because the WordPress.org platform does support the canonical tag (albeit via a plug-in). But this seems to usually require purchasing a domain name and other fees, and there is no guarantee that Google will listen to the canonical tag (it might use whatever domain name I end up with instead).

    Is there a way I can more cleverly integrate the WordPress.com blog into a page hosted on my department's server? Is there some PHP code I can write to retrieve the blog's contents in a way that Google won't treat as a link to (or otherwise attribute to) the blog? Please note: I am a PHP novice at best. I just feel there should be a simpler solution to all this, within the constraints of what I have described above. Thanks!

    Read the article

  • Using Solr and Zend's Lucene port together...

    - by thebluefox
    Afternoon chaps. After my adventures with Zend_Search_Lucene, and discovering it isn't all it's cracked up to be when indexing large datasets, I've turned to Solr (thanks to Bill Karwin for that :) ). I've got Solr indexing the db far, far quicker now, taking just over 8 minutes to index a table of just over 1.7 million rows - which I'm very pleased with. However, when I come to try and search the index with the Zend port, I run into the following error:

        Fatal error: Uncaught exception 'Zend_Search_Lucene_Exception' with message 'Unsupported segments file format' in /var/www/Zend/Search/Lucene.php:407
        Stack trace:
        #0 /var/www/Zend/Search/Lucene.php(555): Zend_Search_Lucene->_readSegmentsFile()
        #1 /var/www/z_search.php(12): Zend_Search_Lucene->__construct('tmp/feeds_index')
        #2 {main}
          thrown in /var/www/Zend/Search/Lucene.php on line 407

    I've tried to have a search around but can't seem to find anything about this problem; everyone else just seems to be able to get them to work? Any help, as always, much appreciated :) Thanks, Tom

    Read the article

  • How to fix this specific Google "Fetch as Googlebot" error appearing in my Webmaster Tools?

    - by UXdesigner
    Good day. I'm currently trying to find out why my website has lost all of its rank in Google. It doesn't even appear in Google results for its own domain, yet other sites that link to me do appear in the results. I think it's all down to leaving the site alone for two months and coming back to find 20k comment spam messages, which I completely deleted and fixed with filters, before adding a new Disqus comment service.

    Thing is, I added my site to Google Webmaster Tools and I'm finding out several awful things. For example, when I click Fetch as Googlebot, I receive the error message below in response to my request, and I don't even know what the real problem is or how to fix it. I simply don't get it. This is what appears:

        Date: Wednesday, July 20, 2011 9:43:35 AM PDT
        Googlebot Type: Web
        Download Time (in milliseconds): 55

        HTTP/1.1 403 Forbidden
        Date: Wed, 20 Jul 2011 16:43:36 GMT
        Server: Apache
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 248
        Keep-Alive: timeout=2, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        403 Forbidden
        Forbidden: You don't have permission to access / on this server.
        Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.

    Do you guys know anything about this problem? I need to have Google crawl my site again; I used to have really nice Google results for the past three years. Now there's nothing. Thanks,

    Read the article

  • My blog not even ranking for exact title match [on hold]

    - by Akshay Hallur
    I have original, detailed blog posts related to blogging and SEO. This domain was dropped (expired) twice before my acquisition; I am the third owner of the domain and have had it for 143 days. Blog posts are not ranking even for exact titles; Google+ or LinkedIn shares show up instead of my content. Some blog posts are not even indexed, and I am getting barely around 7 organic visits a day.

    Example 1: http://www.infoflame.com/offer-pdf-of-blog-posts-for-likes-and-shares/ (title: "Offer Readers PDF of Blog Posts for Their Likes and Shares") is not indexed at all.

    Example 2: http://www.infoflame.com/anchor-text-for-seo/ is indexed but does not come up for the exact title.

    Suspect: a dropped domain, less likely used for spam (the Wayback Machine shows 2 drops and 3 captures since 2004; I don't know whether there was email spam). But there are no manual actions in WMT, so no reconsideration request is possible.

    What's the reason for this? Should I wait? How can I tell Google that ownership has changed and the domain is now spam-free? Or should I de-index it and start a new blog? Thank you for any advice.

    Read the article

  • Algorithms to find longest common prefix in a sliding window.

    - by nn
    Hi, I have written a Lempel-Ziv compressor and decompressor, and I am seeking to improve the time it takes to search the dictionary for a phrase. I have considered K-M-P and Boyer-Moore, but I think an algorithm that adapts to changes in the dictionary would be faster. I've been reading that binary search trees (AVL, or with splays) improve compression time considerably.

    What I fail to understand is how to bootstrap the binary search tree and insert/remove data. I'm not actually sure of the significance of each node in the binary search tree: I am searching for phrases, so will each character be considered a node? Also, how and what is inserted/removed from the search tree as new data enters the dictionary and old data is removed? The binary search tree sounds like a good payoff since it can adapt to the dictionary, but I'm just not quite sure how it's used.
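
    One concrete way to picture it, in the style of the binary-tree match finder used by classic LZSS implementations (e.g. Okumura's): each node is a window position, not a character, and nodes are ordered lexicographically by the phrase that starts at that position, so inserting the newest position discovers the longest match on the way down. A minimal C# sketch, with all names illustrative; deleting positions that slide out of the window is ordinary BST removal on the same ordering and is omitted here:

        class MatchTree
        {
            private class Node
            {
                public int Pos;
                public Node Left, Right;
            }

            private readonly byte[] data; // flat input buffer; positions index into it
            private readonly int maxLen;  // longest match the output format can encode
            private Node root;

            public MatchTree(byte[] data, int maxLen)
            {
                this.data = data;
                this.maxLen = maxLen;
            }

            // Insert position p and return the longest match against the
            // positions currently in the tree (i.e. the current dictionary).
            public (int Pos, int Len) InsertAndFindMatch(int p)
            {
                if (root == null)
                {
                    root = new Node { Pos = p };
                    return (-1, 0);
                }

                int bestPos = -1, bestLen = 0;
                Node cur = root;
                while (true)
                {
                    int len = CommonPrefixLength(cur.Pos, p);
                    if (len > bestLen) { bestLen = len; bestPos = cur.Pos; }

                    if (len >= maxLen)
                    {
                        // Identical up to the maximum usable length: reuse the
                        // node for the newer position, as Okumura's LZSS does.
                        cur.Pos = p;
                        return (bestPos, bestLen);
                    }

                    // The first differing byte decides which subtree to descend into.
                    if (ByteAt(p, len) < ByteAt(cur.Pos, len))
                    {
                        if (cur.Left == null) { cur.Left = new Node { Pos = p }; return (bestPos, bestLen); }
                        cur = cur.Left;
                    }
                    else
                    {
                        if (cur.Right == null) { cur.Right = new Node { Pos = p }; return (bestPos, bestLen); }
                        cur = cur.Right;
                    }
                }
            }

            private int CommonPrefixLength(int a, int b)
            {
                int len = 0;
                while (len < maxLen && a + len < data.Length && b + len < data.Length
                       && data[a + len] == data[b + len])
                    len++;
                return len;
            }

            // Positions that run past the end of the buffer compare as "smaller".
            private int ByteAt(int pos, int offset) =>
                pos + offset < data.Length ? data[pos + offset] : -1;
        }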

    Read the article

  • My approach to SEO implementation needs improvement. [closed]

    - by Eritrea
    I have always copied/pasted the code below as a template for meta tags on projects, but I think they are not as effective as they could possibly be, so I need to know if there is any way I can improve it. Suppose I have a site called coop.com for a company called Coop, and we do import and export as a business in France.

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
        <meta name="robots" content="index, follow" />
        <meta name="description" content="Coop is a major import and export company" />
        <meta name="keywords" content="coop, coop.com, coop company, import export, import export france" />
        <meta name="revisit-after" content="30 days" />
        <title>Coop is an import and export company located in France</title>
        </head>

    The reason I am asking is that I want to know if there are better ways of constructing these SEO tags.

    Read the article

  • Changing the Settings item description text color in Android?

    - by Cori
    I use a Samsung Vibrant, and one of the annoying bits that Samsung tossed into their TouchWiz skin was changing the color of the Settings menu item description text from white (AOSP) to light blue. This looks okay as long as the skin is blue-themed, but most ROMs seem to be leaning toward Gingerbread clones... and it doesn't look good with the green. How can I change that settings description font color back to white, or even the orange it is in actual Android 2.3? Which XML file is the color property located in? The blue text seems to spread to all apps you install, too.

    Read the article

  • What is a fast way to output an h5py dataset to text?

    - by user362761
    I am using the h5py Python package to read files in HDF5 format (e.g. somefile.h5). I would like to write the contents of a dataset to a text file. For example, I would like to create a text file with the following contents:

        1,20,31,75,142,324,78,12,3,90,8,21,1

    I am able to access the dataset in Python using this code:

        import h5py
        f = h5py.File('/Users/Me/Desktop/thefile.h5', 'r')
        group = f['/level1/level2/level3']
        dset = group['dsetname']

    My naive approach is too slow, because my dataset has over 20000 entries:

        # write all values to file
        for index in range(len(dset)):
            # do not add a comma after the last value
            if index == len(dset) - 1:
                txtfile.write(repr(dset[index]))
            else:
                txtfile.write(repr(dset[index]) + ',')
        txtfile.close()
        return None

    Is there a faster way to write this to a file? Perhaps I could convert the dataset into a NumPy array or even a Python list, and then use some file-writing tool? (I could experiment with concatenating the values into a larger string before writing to the file, but I'm hoping there's something entirely more elegant.)

    Read the article

  • Will we be penalized for having multiple external links to the same site?

    - by merk
    There seem to be conflicting answers to this question. The most relevant ones seem to be at least a year or two old, so I thought it would be worth re-asking. My gut says it's OK, because there are plenty of sites out there that do this already. Every major retailer site usually has links to the manufacturer of whatever item they are selling: go to www.newegg.com and they have hundreds of links to the same site, since they sell multiple items from the same brand.

    Our site allows people to list a specific genre of items for sale (not porn - I'm just keeping it generic since I'm not trying to advertise), and on each item listing page we have a link back to the seller's website if they want one. Our SEO guy is saying this is really bad and Google is going to treat us as a link farm. My gut says that when we have to start limiting useful features of our site to boost our ranking, or start jumping through hoops such as hiding text with JavaScript, something is wrong.

    Some clients are only selling one to a handful of items, while a couple of our bigger clients have hundreds of items listed, so they will have hundreds of pages that link back to their site. I should also mention that there will be a handful of pages for the bigger clients where it may appear they have duplicate pages, because they will be selling 2 or 3 of the same item and the only difference in the content of the page might be a stock #. The majority of the pages, though, will have unique content.

    So - will we be penalized in some way for having anywhere from a handful to a few hundred pages that all point to the same link? If we are penalized, what's the suggested way to handle this? We still want to give users the option to go to the client's site, and we would still like to give a link back to the client's site to help their own SE rankings.

    Read the article

  • How to stop jQuery from returning tabs and spaces from formatted code on .html() .val() .text() etc.

    - by brandonjp
    I've got an HTML table:

        <table><tr>
            <td>M1</td>
            <td>M2</td>
            <td>M3</td>
            <td>M4</td>
        </tr></table>

    and a simple jQuery script:

        $('td').click(function(){
            alert( $(this).html() );
        });

    That works just fine... but in the real world, I've got several hundred table cells, and the code is formatted improperly in places because several people have edited the page. So if the HTML is:

        <td>
            M1
        </td>

    then the alert() gives me all the tabs, returns, and spaces. What can I do to get ONLY the text, without the tabs and spaces? I've tried .html(), .val(), and .text() to no avail. Thanks!

    Read the article

  • Is there a Windows utility that will let me do multiple programmatic find/replaces on text that I cut and paste?

    - by billmaya
    I've inherited some C# code that contains about a thousand lines of source that I need to modify, transforming it from this:

        newDataRow["to_dir"] = comboBox108.Text;

    to this:

        assetAttributes.Add("to_dir", comboBox108.Text);

    The lines occur in various places throughout the application, in groups of 40 or 50. Modifying each line by hand in Visual Studio 2008 can be done, but it's labor-intensive and prone to errors. Is there a Windows utility out there that will let me cut and paste groups of code into it and then run some sort of regex to transform the individual lines one by one? I'd also be willing to use some sort of VS 2008 add-in that performed the same set of regex operations against a selection of code. Thanks in advance.
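
    Whatever tool ends up doing it, the transformation itself is a single regular-expression replace. A C# sketch of the pattern (assuming every key is a simple double-quoted string and the right-hand side contains no semicolon):

        using System;
        using System.Text.RegularExpressions;

        class RewriteAssignments
        {
            static void Main()
            {
                string input = "newDataRow[\"to_dir\"] = comboBox108.Text;";

                // Capture the quoted key and the right-hand side, then re-emit
                // them in the assetAttributes.Add(...) form.
                string output = Regex.Replace(
                    input,
                    @"newDataRow\[(?<key>""[^""]+"")\]\s*=\s*(?<value>[^;]+);",
                    "assetAttributes.Add(${key}, ${value});");

                Console.WriteLine(output);
                // prints: assetAttributes.Add("to_dir", comboBox108.Text);
            }
        }

    The same pattern should work in any editor or utility that supports .NET-style regex find/replace.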

    Read the article

  • Elegant way to search for UTF-8 files with BOM?

    - by vog
    For debugging purposes, I need to recursively search a directory for all files which start with a UTF-8 byte order mark (BOM). My current solution is a simple shell script:

        find -type f | while read file
        do
            if [ "`head -c 3 -- "$file"`" == $'\xef\xbb\xbf' ]
            then
                echo "found BOM in: $file"
            fi
        done

    Or, if you prefer short, unreadable one-liners:

        find -type f|while read file;do [ "`head -c3 -- "$file"`" == $'\xef\xbb\xbf' ] && echo "found BOM in: $file";done

    It doesn't work with filenames that contain a line break, but such files are not to be expected anyway. Is there any shorter or more elegant solution? Are there any interesting text editors or macros for text editors?

    Read the article

  • HAProxy error: Some configuration options require full privileges, so global.uid cannot be changed

    - by Athena Wisdom
    After adding the following line to /etc/haproxy/haproxy.cfg as part of creating a transparent proxy:

        source 0.0.0.0 usesrc clientip

    restarting haproxy starts giving an error:

        ~# service haproxy reload
         * Reloading haproxy haproxy
        [ALERT] 230/153724 (1140) : [/usr/sbin/haproxy.main()] Some configuration options require full privileges, so global.uid cannot be changed.

    I'm already running service haproxy reload as root. What else do we have to do? Thank you!

    Read the article

  • Upgrade from Windows 2008 Server Core to full Windows 2008 Server

    - by laurens
    Possible Duplicate: Install GUI on Windows Server 2008 Core

    As I've seen, there isn't really a topic about this here... My question: is there any way to upgrade from Windows 2008 Server Core to the full Windows 2008 Server? The server is used as a Hyper-V host machine. On the internet mostly I find "no, you'll have to reinstall", but maybe there's a workaround? Thanks in advance

    Read the article

  • Full screen flash lags on Windows 7

    - by Chmod
    I have been experiencing an issue since I upgraded to Windows 7 where all Flash videos have horrible frame rates when run in full-screen mode. I have tried multiple browsers, and I have the latest version of Flash and the latest video drivers. I have tried re-installing Flash and disabling hardware acceleration. The problem occurs on YouTube and Hulu. This is a clean install on a Core 2 Duo with 4GB RAM and an Nvidia 8400M GS.

    Read the article

  • tproxy squid bridge very slow when cache is full

    - by Roberto
    I have installed a bridge tproxy proxy on a fast server with 8GB of RAM. The traffic is around 60Mb/s. When I start the proxy for the first time (with the cache empty) the proxy works very well, but when the cache becomes full (a few hours later) the bridge becomes very slow: the traffic drops below 10Mb/s and the proxy server becomes unusable. Any hints as to what may be happening?

    I'm using linux-2.6.30.10, iptables-1.4.3.2, and squid-3.1.1, compiled with these options:

        ./configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --datadir=/usr/share --localstatedir=/var/lib --sysconfdir=/etc/squid --libexecdir=/usr/libexec/squid --localstatedir=/var --datadir=/usr/share/squid --enable-removal-policies=lru,heap --enable-icmp --disable-ident-lookups --enable-cache-digests --enable-delay-pools --enable-arp-acl --with-pthreads --with-large-files --enable-htcp --enable-carp --enable-follow-x-forwarded-for --enable-snmp --enable-ssl --enable-async-io=32 --enable-linux-netfilter --enable-epoll --disable-poll --with-maxfd=16384 --enable-err-languages=Spanish --enable-default-err-language=Spanish

    My squid.conf:

        cache_mem 100 MB
        memory_pools off
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        acl to_localhost dst ::1/128
        acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
        acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
        acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
        acl localnet src fc00::/7       # RFC 4193 local private network range
        acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
        acl net-g1 src xxx.xxx.xxx.xxx/24
        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT
        http_access allow manager localhost
        http_access deny manager
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow net-g1        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost
        http_access deny all
        http_port 3128
        http_port 3129 tproxy
        hierarchy_stoplist cgi-bin ?
        cache_dir ufs /var/spool/squid 8000 16 256
        access_log none
        cache_log /var/log/squid/cache.log
        coredump_dir /var/spool/squid
        refresh_pattern ^ftp:           1440    20%     10080
        refresh_pattern ^gopher:        1440    0%      1440
        refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
        refresh_pattern .

    I have this issue when the cache is full, but do not really know if it is because of that. Thanks in advance, and sorry for my English. roberto

    Read the article

  • List full timestamps of files in a tarball

    - by Mechanical snail
    I have a large tar archive and want to see the exact (nanosecond) timestamps that are stored for each file in the archive. In case it's relevant, the tarball is in POSIX-2001 format (tar --format=posix). tar --list --verbose displays the timestamps rounded off to the minute. For comparison, ls --full-time does what I want, but I'd rather not have to extract everything first because it's huge. For my purposes, command-line and GUI tools are both fine.

    Read the article

  • Adding features to SQL Server 2008 Standard with SP1

    - by poopa
    I have MS SQL Server 2008 with SP1 installed. I want to add the full-text search feature to the installation. Do I need to run the SP1 update again after I add this feature? If I try to run it (SQLServer2008SP1-KB968369-x64-ENU), it fails with this error: "There are no SQL Server instances or shared features that can be updated on this computer". Should I uninstall SP1 and re-install? Doesn't make much sense.

    Read the article

  • Metacity/GNOME full screen as default

    - by singpolyma
    I recently discovered the keyboard shortcut for Metacity's "full screen" option, where a window is fully maximised with the titlebar and border hidden. I like this mode a lot, and for one of my systems I would like to make it the default for all new windows. How would I do that?

    Read the article
