Search Results

Search found 18693 results on 748 pages for 'browser extension'.

Page 250/748 | < Previous Page | 246 247 248 249 250 251 252 253 254 255 256 257  | Next Page >

  • Redirecting 2 or more domains to same hosting server

    - by mtk
    I have the domains A.com, A.co.in and A.in, purchased from site X. I have a hosting space/account purchased from site Y, which has provided me with 2 nameserver (DNS) entries that are to be set in the account at the site where I purchased the domains. I have successfully changed the DNS entries of A.com to these 2 entries, and I am able to see my index.html page when I hit A.com. Problem: Along the same lines, I have changed the DNS entries for A.co.in and A.in to the same values, but hitting those sites in a browser gives me no response, and the browser's own 'Site not found' page is shown. Please let me know how to set this up so that when I hit any of the domains, the web site is served from the hosting server. What am I doing wrong here? Note: It has been more than 3 days since changing the DNS entries, so I don't think this is a DNS propagation problem, which I have heard about from some people. Please provide some detailed explanation, as I am very, very new to this. This is my first hosting ;) -Thanks
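
    A quick, language-agnostic way to narrow this down is to resolve all three domains and compare the answers. The sketch below is an illustration only (it assumes Python 3 is available, and the domain names are placeholders for the real ones); if A.co.in and A.in fail to resolve, or resolve to a different address than A.com, their nameserver change has not taken effect at the registrar.

    ```python
    # Diagnostic sketch (not from the original post): resolve each domain and
    # compare the results. The domain names below are placeholders.
    import socket

    domains = ["a.com", "a.co.in", "a.in"]

    for name in domains:
        try:
            print(name, "->", socket.gethostbyname(name))
        except socket.gaierror as err:
            print(name, "-> does not resolve:", err)
    ```

    If the two .in domains resolve to a parking page or not at all while A.com points at the hosting server, the nameserver entries for those two domains still need to be corrected at the registrar, or simply need more time to propagate.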

    Read the article

  • Clutter for game GUI

    - by tjameson
    I'm pretty new to game development, having only written a simple 3D game for a class project, but I'd like to get started on a bigger project. I'm writing an MMORPG to run both in the browser (WebGL) and natively (OpenGL ES 2). In choosing a GUI toolkit, I'm trying to find one that would work natively and would be simple to emulate in WebGL. I am considering using D or Go for writing my game, so interfacing with C++ libraries will be difficult, if not impossible. Of course, the language isn't the end goal here, so if using C++ will save considerable time, I'll bite the bullet and use that. In order to reduce the amount of code I'll have to write for the browser, I'm considering using something simple like Clutter for basic abstractions, which I think will be pretty easy to emulate (layered canvases, maybe?). Does anyone have experience using Clutter for a 3D game? Note: I haven't used any game development libraries, and I only have limited experience with GUI libraries. I do have HTML+CSS experience, so maybe libRocket is a viable solution?

    Read the article

  • Change XAMPP's htdocs web root folder to another one

    - by vitto
    I'm trying to change XAMPP's default web root directory /opt/lampp/htdocs to another one like /home/me/Dropbox/public_html, without success. I've edited the file /opt/lampp/etc/httpd.conf:

        # old line: DocumentRoot "/opt/lampp/htdocs"
        DocumentRoot "/home/me/Dropbox/public_html"
        #...etc...
        # old line: <Directory "/opt/lampp/htdocs">
        <Directory "/home/me/Dropbox/Work/public_html">
        #
        # Possible values for the Options directive are "None", "All",
        # or any combination of:
        #   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
        # etc...

    I did this as described in this article: Using Ubuntu One to synchronise htdocs? Then I restarted Apache and got a 403 permission error on every page I requested with the web browser. So I changed the folder and file permissions to 755, as described in this article: What file permissions should I set on web root? The problem remains the same: I get the 403 error on every page I try to reach with the web browser. I have the same problem on a Mac using XAMPP. Everything works fine if the folder remains the original /opt/lampp/htdocs. How can I change it correctly?
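
    One frequent cause of a 403 after moving the web root under a home directory is that the Apache user cannot traverse a parent directory: every directory from / down to the web root needs the execute bit for "other" (or an equivalent ACL), not just the web root itself. The sketch below is only an illustrative check, assuming Python 3 and the path taken from the question; it is not a definitive diagnosis.

    ```python
    # Illustrative check (assumes the 403 comes from directory permissions):
    # walk from / down to the web root and report whether "other" users have
    # the execute (traverse) bit on each directory along the way.
    import os
    import stat

    webroot = "/home/me/Dropbox/public_html"   # path from the question

    path = "/"
    for part in [p for p in webroot.split("/") if p]:
        path = os.path.join(path, part)
        try:
            mode = os.stat(path).st_mode
            ok = bool(mode & stat.S_IXOTH)
            print("ok  " if ok else "FAIL", path)
        except OSError as err:
            print("FAIL", path, err)
    ```

    If any parent directory reports FAIL, adding o+x to that directory usually clears the 403; the new <Directory> block in httpd.conf also has to allow access, and note that the DocumentRoot and <Directory> paths quoted above do not match exactly, which is worth double-checking too.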

    Read the article

  • Good links somehow being converted to ones with a PHP redirect (not a virus)

    - by Rebecca
    This has happened to links we put on web pages and in emails. We might put www.oursite.org/work/, but when I view source it shows up as webmail.ourhosting.ca/hwebmail/services/go.php?url=https%3A%2F%2Fwww.oursite.org%2F%2work%2F This ends up at the webmail login page for our web host. But only some of the people who click the link get the login page; others go directly to the original page we intended. We don't want it to go to the webmail login page; nobody needs to log in to our web site. This occurs for links to pages on our site, but also for links to other sites that we put in emails or in posts. It seems to be browser-independent as well as email-client-independent, as we have variously used Firefox and Chrome as well as MS Outlook and Thunderbird. I've tried to resolve the issue with our web host, but they keep telling me they don't support our browser, or our email client (i.e., they don't understand the issue). At the moment, our only option is to try another web host just to get rid of their login. Any ideas about what's going on?

    Read the article

  • Remember me or not?

    - by taeja87
    I was told to post this on Webmasters instead of Stack Overflow. Is it safe to have a "remember me" feature? Would it be somewhat safe (knowing it won't be 100% safe) to allow users to close their browser and come back still logged in? I am not exactly sure which way I should go after reading different things about safety. I learned about session fixation and implemented security measures to add more protection. From experience, on some sites if "remember me" is checked then only your username/email is remembered and you are required to re-enter your password; other sites let you come and go as much as you want without being logged out after the browser has closed. If it is safe, what is the current best way of implementing "remember me"/stay logged in? http://stackoverflow.com/questions/3531377/best-practise-for-remember-me-feature http://stackoverflow.com/questions/5087969/what-is-the-code-for-stay-logged-in-or-remember-me-while-user-login-in-php http://bytes.com/topic/php/answers/881197-stay-logged-remember-me-php-sessions-cookies http://security.stackexchange.com/questions/41/good-session-practices Also: the site I am working on uses email & password login.
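
    The pattern most of the linked threads converge on is a persistent-login cookie made of a random selector plus a random validator, where only a hash of the validator is stored server-side. The sketch below is a minimal, framework-agnostic illustration of that idea in Python; the in-memory dict stands in for a database table, and all names are invented for the example rather than taken from any particular site.

    ```python
    # Minimal sketch of the selector/validator "remember me" pattern.
    # The `tokens` dict stands in for a persistent-login table; cookie
    # transport and expiry are left to whatever web framework is in use.
    import hashlib
    import hmac
    import secrets

    tokens = {}  # selector -> (user_id, sha256 hash of the validator)

    def issue_remember_me(user_id):
        """Create a token; the cookie value is 'selector:validator'."""
        selector = secrets.token_urlsafe(9)
        validator = secrets.token_urlsafe(32)
        tokens[selector] = (user_id, hashlib.sha256(validator.encode()).hexdigest())
        return "{0}:{1}".format(selector, validator)

    def check_remember_me(cookie_value):
        """Return the user_id if the cookie is valid, otherwise None."""
        try:
            selector, validator = cookie_value.split(":", 1)
        except ValueError:
            return None
        record = tokens.get(selector)
        if record is None:
            return None
        user_id, stored_hash = record
        supplied = hashlib.sha256(validator.encode()).hexdigest()
        # constant-time comparison avoids leaking matches via timing
        return user_id if hmac.compare_digest(stored_hash, supplied) else None

    cookie = issue_remember_me(user_id=42)
    print(check_remember_me(cookie))            # 42
    print(check_remember_me("bogus:value"))     # None
    ```

    Storing only a hash means a stolen copy of the table cannot be replayed as login cookies, and sensitive actions (changing the password or email) should still require the real password.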

    Read the article

  • Location-Based redirection and duplication in sub-directories affecting SEO

    - by Joshua
    I currently own the website www.xyz.com. The website has a sub-directory for each of the 3 target countries: .../en-US/ (United States), .../es-MX/ (Mexico), and .../es-DO/ (Dominican Republic). I have two main questions about this setup: Currently, the main domain/root (xyz.com) contains a blank index.php file, but I would like a user to be redirected to one of the sub-directories based on their regional location. What is the best way to accomplish this? I have looked at using browser language-based redirection, but how would I know whether to direct a user to the MX or DO site if the browser language is set to Spanish? Is there a way to detect a user's geographic location? Also, the 3 websites are practically identical except that each has a unique color scheme, and the US site is in English while the MX and DO sites are in Spanish. My problem is that I believe Googlebot is penalizing/banning my site because the Spanish text on the MX and DO pages is nearly identical and is thus marked as duplicate/spam content. Is there a way to avoid this?
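
    On the first question: the Accept-Language header alone cannot distinguish Mexico from the Dominican Republic, so some form of IP geolocation is the usual answer. The sketch below is an illustration of that idea only; it assumes the third-party geoip2 package and a downloaded GeoLite2-Country.mmdb database, neither of which appears in the original question, and it falls back to a default section instead of guessing.

    ```python
    # Illustrative sketch: map a visitor's IP address to one of the three
    # existing sub-directories. Assumes the `geoip2` package and a local
    # GeoLite2-Country.mmdb file (both are assumptions for this example).
    import geoip2.database
    import geoip2.errors

    SUBDIR_BY_COUNTRY = {"US": "/en-US/", "MX": "/es-MX/", "DO": "/es-DO/"}
    DEFAULT_SUBDIR = "/en-US/"

    reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

    def choose_subdirectory(ip_address):
        """Return the site section a visitor should be redirected to."""
        try:
            country = reader.country(ip_address).country.iso_code
        except geoip2.errors.AddressNotFoundError:
            country = None
        return SUBDIR_BY_COUNTRY.get(country, DEFAULT_SUBDIR)

    print(choose_subdirectory("8.8.8.8"))
    ```

    For the duplicate-content part, marking the three sections as alternates of each other with rel="alternate" hreflang annotations (en-US, es-MX, es-DO) is the generally recommended way to tell Google they are regional variants rather than duplicated spam.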

    Read the article

  • Release Notes for 6/14/2012

    Here are the notes for this week’s release: Diffs in Pull Requests and Commits We altered the way we display diffs across commits and pull requests to maximize the amount of vertical real estate devoted to the diff. Before, the viewport for diffs was always snapped to the height of the browser, which meant that on lower resolutions, the amount of space for viewing diffs could become very tiny. Now, the majority of the browser vertical space is devoted to viewing the diffs. Let us know what you think! Bug Fixes Fixed an issue where returning to the list of files changed from a diff would sometimes not show the list of files. Fixed the dialogs for approving and denying requests to join projects. Fixed various issues around validation of project details when publishing a project. Fixed an issue that caused the formatting of our tabs in pull requests to not display properly. Fixed an issue where users browsing Unicode files in a Git project would see error pages. Fixed various issues where the option to subscribe to notifications would not appear properly. Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.

    Read the article

  • cookie not being sent when requesting JS

    - by Mala
    I host a web service, and provide my members with a JavaScript bookmarklet, which loads a JS script from my server. However, clients must be logged in in order to receive the JS script. This works for almost everybody. However, some users on setups (i.e., browser/OS combinations) that are known to work for other people have the following problem: when they request the script via the JavaScript bookmarklet from my server, their cookie from my server does not get included with the request, and as such they are always "not authenticated". I'm making the request in the following way:

        var myScript = eltCreate('script');
        myScript.setAttribute('src','http://myserver.com/script');
        document.body.appendChild(myScript);

    In a fit of confused desperation, I changed the script page to simply output "My cookie has [x] elements" where [x] is count($_COOKIE). If this extremely small subset of users requests the script via the normal method, the message reads "My cookie has 0 elements". When they access the URL directly in their browser, the message reads "My cookie has 7 elements". What on earth could be going on?!

    Read the article

  • Google Rolls Out Secured Search. It’s Slightly Different From Regular Search

    - by Gopinath
    Google rolled out a secured version of its search engine at https://google.com (did you notice https instead of http?). This search engine lets everyone use Google search in a secure way. How is it secured? When you use https://google.com, the data exchanged between your browser and Google's servers is encrypted, so that no one can sniff it. Is my search history secured from Google? No. The search queries you submit to Google are still stored on Google's servers; there is no change to Google's search-history recording. Are there any differences between regular search and secured search results? Yes, secured search is slightly different from regular search. When you are using Google Secured Search, image search options will not be available in the left sidebar. The site may also respond more slowly than regular search, as there is overhead in establishing a secure connection between your browser and the server. Join us on Facebook to read all our stories right inside your Facebook news feed.

    Read the article

  • At what point does caching become necessary for a web application?

    - by Zaemz
    I'm considering the architecture for a web application. It's going to be a single-page application that updates itself whenever the user selects different information on the several forms that are on the page. I was thinking that it wouldn't be good to rely on the user's browser to correctly interpret the information and update the view, so I'll send the user's choices to the server, fetch the data, send it back to the browser, and update the view. There's a table with 10,000 or so rows in a MySQL database that's going to be accessed pretty often, around once every 5-30 seconds per user, and I'm expecting 200-300 concurrent users. I've read that a well-designed relational database with simple queries is nothing for an RDBMS to handle, really, but I would still like to keep things quick for the client. Should this even be a concern for me at the moment? At what point would it be helpful to start using a separate caching service like Memcached or Redis, or would it even be necessary? I know that MySQL caches popular queries and their results; would that suffice?
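
    If a dedicated cache does turn out to be worthwhile, the usual shape is cache-aside: check the cache, fall back to the database on a miss, and store the result with a short TTL. The sketch below illustrates that shape with the redis-py client; the connection details, key naming and query function are placeholders for this example, not details from the question.

    ```python
    # Cache-aside sketch (illustration only). Assumes the third-party `redis`
    # package; run_query() is a stand-in for the real MySQL access code.
    import json
    import redis

    cache = redis.Redis(host="localhost", port=6379)
    CACHE_TTL_SECONDS = 30   # short TTL keeps results reasonably fresh

    def run_query(form_choices):
        """Stand-in for the real database call."""
        return [{"id": 1, "value": "example row"}]

    def fetch_results(form_choices):
        key = "results:" + json.dumps(form_choices, sort_keys=True)
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)          # cache hit: skip MySQL
        rows = run_query(form_choices)         # cache miss: hit the database
        cache.setex(key, CACHE_TTL_SECONDS, json.dumps(rows))
        return rows

    print(fetch_results({"category": "a", "region": "b"}))
    ```

    Whether this is needed at all is best decided by measuring first; at a few hundred concurrent users against a 10,000-row table with simple queries, MySQL on its own is often perfectly adequate.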

    Read the article

  • Live HTML Preview

    - by netbeanstips
    When you need to edit an HTML document in NetBeans, you may find this little plugin useful: it adds a Preview tab to the HTML editor window. The plugin works with some small issues in NetBeans 7.4, but I recommend using development builds instead. So install the plugin, restart NetBeans and open any HTML document. Notice there's a Preview button in the editor's toolbar (see the red rectangle in the picture below). Now split the editor window by dragging the split button at the top right corner; you can also use the menu View - Split - Vertically. Then, in the bottom split pane, toggle the Preview button. You will get a live preview of your HTML source code, and the preview pane will auto-refresh as you edit. There are even some handy tools in the Preview toolbar; for example, you can resize the preview browser to match the screen dimensions of various device types. I know there is full-blown HTML5 support in NetBeans 7.4, but if you need to edit a single document, or when you're running a Java-only NetBeans distribution, this plugin may come in handy... Note: the plugin is built on top of the embedded WebKit browser, which is based on the JavaFX WebView component, so there might be some issues when using the plugin on some flavors of Linux.

    Read the article

  • jQuery is not working over the local network [migrated]

    - by Kortyell Davis
    I have a Fedora server running the Apache web server. The server is connected to a home network, and I have a laptop connected to the same network. I can enter the IP address of my server into the browser on my laptop and pull up the index.html file located in the document root directory of the Fedora home server. The index.html file contains jQuery code. The jQuery code only works when I open the file locally in my browser (e.g. right click, open with Firefox); when I attempt to view the web page from my laptop, the jQuery code is not executed. The code is below.

        <script type="text/javascript" src="jquery-1.8.2.js"></script>

        $(document).ready(function() {
            $('#form').hide();
            $('input[type=text]').focus(function() {
                $(this).val('');
            });
            $('input[type=password]').focus(function() {
                $(this).val('');
            });
            $('.form').hide();
            $('#log').click(function(){
                $('#form').toggle();
            });
            $('#reg').click(function(){
                $('.form').toggle();
            });
        });

    Read the article

  • QWebView in PySide goes wrong after packaging with PyInstaller

    - by truease.com
    Here's my code import sys from PySide.QtCore import * from PySide.QtGui import * from PySide.QtWebKit import * from encodings import * from codecs import * class BrowserWindow( QWidget ): def __init__( self, parent=None ): QWidget.__init__( self, parent ) self.Setup() self.SetupEvent() def Setup( self ): self.setWindowTitle( u"Truease Speedy Browser" ) self.addr_input = QLineEdit() self.addr_go = QPushButton( "GO" ) self.addr_bar = QHBoxLayout() self.addr_bar.addWidget( self.addr_input ) self.addr_bar.addWidget( self.addr_go ) for attr in [ QWebSettings.AutoLoadImages, QWebSettings.JavascriptEnabled, QWebSettings.JavaEnabled, QWebSettings.PluginsEnabled, QWebSettings.JavascriptCanOpenWindows, QWebSettings.JavascriptCanAccessClipboard, QWebSettings.DeveloperExtrasEnabled, QWebSettings.SpatialNavigationEnabled, QWebSettings.OfflineStorageDatabaseEnabled, QWebSettings.OfflineWebApplicationCacheEnabled, QWebSettings.LocalStorageEnabled, QWebSettings.LocalStorageDatabaseEnabled, QWebSettings.LocalContentCanAccessRemoteUrls, QWebSettings.LocalContentCanAccessFileUrls, ]: QWebSettings.globalSettings().setAttribute( attr, True ) self.web_view = QWebView() self.web_view.load( "http://www.baidu.com" ) layout = QVBoxLayout() layout.addLayout( self.addr_bar ) layout.addWidget( self.web_view ) self.setLayout( layout ) def SetupEvent( self ): self.connect( self.addr_input, SIGNAL("editingFinished()"), self, SLOT("Load()"), ) self.connect( self.addr_go, SIGNAL("pressed()"), self, SLOT("Load()") ) self.connect( self.web_view, SIGNAL("urlChanged(const QUrl&)"), self, SLOT("SetURL()"), ) def Load( self, *args, **kwargs ): url = self.GetCleanedURL() if url != self.CurrentURL(): self.web_view.load( url ) def SetURL( self, *args, **kwargs ): self.addr_input.setText( self.CurrentURL() ) def GetCleanedURL( self ): url = self.addr_input.text().strip() if not url.startswith("http"): url = "http://" + url return url def CurrentURL( self ): url = self.web_view.url().toString() return url def Main(): app = QApplication( sys.argv ) widget = BrowserWindow() widget.show() return app.exec_() if __name__ == '__main__': sys.exit( Main() ) I works well when i using python browser.py. but it goes wrong after packaged with pyinstaller -w browser.py. it doesn't load images can only display correct text in utf-8 And this is the pyinstaller output: E:\true\wuk\app2>pyinstaller -w b.py 16 INFO: wrote E:\true\wuk\app2\b.spec 16 INFO: Testing for ability to set icons, version resources... 32 INFO: ... resource update available 32 INFO: UPX is not available. 46 INFO: Processing hook hook-os 141 INFO: Processing hook hook-time 157 INFO: Processing hook hook-cPickle 218 INFO: Processing hook hook-_sre 312 INFO: Processing hook hook-cStringIO 407 INFO: Processing hook hook-encodings 421 INFO: Processing hook hook-codecs 750 INFO: Processing hook hook-httplib 750 INFO: Processing hook hook-email 843 INFO: Processing hook hook-email.message 1046 WARNING: library python%s%s required via ctypes not found 1171 INFO: Extending PYTHONPATH with E:\true\wuk\app2 1171 INFO: checking Analysis 1171 INFO: building because b.py changed 1171 INFO: running Analysis out00-Analysis.toc 1171 INFO: Adding Microsoft.VC90.CRT to dependent assemblies of final executable 1171 INFO: Searching for assembly x86_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.21022.8_x-ww ... 
1171 INFO: Found manifest C:\WINDOWS\WinSxS\Manifests\x86_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.21022.8_x-ww_d08d0375.manifest 1187 INFO: Searching for file msvcr90.dll 1187 INFO: Found file C:\WINDOWS\WinSxS\x86_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.21022.8_x-ww_d08d0375\msvcr90.dll 1187 INFO: Searching for file msvcp90.dll 1187 INFO: Found file C:\WINDOWS\WinSxS\x86_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.21022.8_x-ww_d08d0375\msvcp90.dll 1187 INFO: Searching for file msvcm90.dll 1187 INFO: Found file C:\WINDOWS\WinSxS\x86_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.21022.8_x-ww_d08d0375\msvcm90.dll 1266 INFO: Analyzing D:\Applications\Python\lib\site-packages\pyinstaller-2.1-py2.7.egg\PyInstaller\loader\_pyi_bootstrap.py 1266 INFO: Processing hook hook-os 1282 INFO: Processing hook hook-site 1296 INFO: Processing hook hook-encodings 1391 INFO: Processing hook hook-time 1407 INFO: Processing hook hook-cPickle 1468 INFO: Processing hook hook-_sre 1578 INFO: Processing hook hook-cStringIO 1671 INFO: Processing hook hook-codecs 2016 INFO: Processing hook hook-httplib 2016 INFO: Processing hook hook-email 2109 INFO: Processing hook hook-email.message 2312 WARNING: library python%s%s required via ctypes not found 2468 INFO: Processing hook hook-pydoc 2516 INFO: Analyzing D:\Applications\Python\lib\site-packages\pyinstaller-2.1-py2.7.egg\PyInstaller\loader\pyi_importers.py 2609 INFO: Analyzing D:\Applications\Python\lib\site-packages\pyinstaller-2.1-py2.7.egg\PyInstaller\loader\pyi_archive.py 2687 INFO: Analyzing D:\Applications\Python\lib\site-packages\pyinstaller-2.1-py2.7.egg\PyInstaller\loader\pyi_carchive.py 2782 INFO: Analyzing D:\Applications\Python\lib\site-packages\pyinstaller-2.1-py2.7.egg\PyInstaller\loader\pyi_os_path.py 2782 INFO: Analyzing b.py 2796 INFO: Processing hook hook-PySide 2875 INFO: Hidden import 'codecs' has been found otherwise 2875 INFO: Hidden import 'encodings' has been found otherwise 2875 INFO: Looking for run-time hooks 7766 INFO: Using Python library C:\WINDOWS\system32\python27.dll 7796 INFO: E:\true\wuk\app2\build\b\out00-Analysis.toc no change! 
7796 INFO: checking PYZ 7812 INFO: checking PKG 7812 INFO: building because E:\true\wuk\app2\build\b\b.exe.manifest changed 7812 INFO: building PKG (CArchive) out00-PKG.pkg 7828 INFO: checking EXE 7843 INFO: rebuilding out00-EXE.toc because pkg is more recent 7843 INFO: building EXE from out00-EXE.toc 7843 INFO: Appending archive to EXE E:\true\wuk\app2\build\b\b.exe 7843 INFO: checking COLLECT 7843 INFO: building COLLECT out00-COLLECT.toc Use pyinstaller browser.py, and in the console window i got QFont::setPixelSize: Pixel size <= 0 (0) QSslSocket: cannot call unresolved function SSLv23_client_method QSslSocket: cannot call unresolved function SSL_CTX_new QSslSocket: cannot call unresolved function SSL_library_init QSslSocket: cannot call unresolved function ERR_get_error QSslSocket: cannot call unresolved function SSLv23_client_method QSslSocket: cannot call unresolved function SSL_CTX_new QSslSocket: cannot call unresolved function SSL_library_init QSslSocket: cannot call unresolved function ERR_get_error QSslSocket: cannot call unresolved function SSLv23_client_method QSslSocket: cannot call unresolved function SSL_CTX_new QSslSocket: cannot call unresolved function SSL_library_init QSslSocket: cannot call unresolved function ERR_get_error QSslSocket: cannot call unresolved function SSLv23_client_method QSslSocket: cannot call unresolved function SSL_CTX_new QSslSocket: cannot call unresolved function SSL_library_init QSslSocket: cannot call unresolved function ERR_get_error QFont::setPixelSize: Pixel size <= 0 (0)
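
    The symptoms described (images not loading, only UTF-8 text rendering correctly, unresolved QSslSocket functions) commonly point to the frozen application missing Qt's plugin DLLs (imageformats, codecs) and the OpenSSL libraries, which PyInstaller does not always bundle on its own. As a hedged illustration only, one low-tech check is to copy the PySide plugin folders next to the PyInstaller output after building; the dist folder name below is an assumption, and the exact layout may differ per PySide version.

    ```python
    # Post-build helper (illustration only): copy PySide's Qt plugin folders
    # next to the frozen executable so the app can find them at runtime.
    # The dist folder name is an assumption; adjust for your build output.
    import os
    import shutil

    import PySide

    pyside_dir = os.path.dirname(PySide.__file__)
    dist_dir = os.path.join("dist", "browser")

    for plugin_group in ("imageformats", "codecs"):
        src = os.path.join(pyside_dir, "plugins", plugin_group)
        dst = os.path.join(dist_dir, plugin_group)
        if os.path.isdir(src) and not os.path.isdir(dst):
            shutil.copytree(src, dst)
            print("copied", plugin_group, "->", dst)
        else:
            print("skipped", plugin_group, "(missing source or already present)")
    ```

    The QSslSocket warnings separately suggest the OpenSSL DLLs (libeay32.dll/ssleay32.dll on Windows for Qt 4) are not being found; placing them alongside the executable is the usual remedy if HTTPS pages are needed.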

    Read the article

  • View Generated Source (After AJAX/JavaScript) in C#

    - by Michael La Voie
    Is there a way to view the generated source of a web page (the code after all AJAX calls and JavaScript DOM manipulations have taken place) from a C# application without opening up a browser from the code? Viewing the initial page using a WebRequest or WebClient object works OK, but if the page makes extensive use of JavaScript to alter the DOM on page load, then these don't provide an accurate picture of the page. I have tried using the Selenium and WatiN UI testing frameworks and they work perfectly, supplying the generated source as it appears after all JavaScript manipulations are completed. Unfortunately, they do this by opening up an actual web browser, which is very slow. I've implemented a Selenium server which offloads this work to another machine, but there is still a substantial delay. Is there a .NET library that will load and parse a page (like a browser) and spit out the generated code? Clearly, Google and Yahoo aren't opening up browsers for every page they want to spider (of course they may have more resources than me...). Is there such a library, or am I out of luck unless I'm willing to dissect the source code of an open source browser? SOLUTION Well, thank you everyone for your help. I have a working solution that is about 10X faster than Selenium. Woo! Thanks to this old article from beansoftware I was able to use the System.Windows.Forms.WebBrowser control to download the page and parse it, then give me the generated source. Even though the control is in Windows.Forms, you can still run it from ASP.NET (which is what I'm doing); just remember to add System.Windows.Forms to your project references. There are two notable things about the code. First, the WebBrowser control is called in a new thread. This is because it must run in a single-threaded apartment. Second, the GeneratedSource variable is set in two places. This is not due to an intelligent design decision :) I'm still working on it and will update this answer when I'm done. wb_DocumentCompleted() is called multiple times: first when the initial HTML is downloaded, then again when the first round of JavaScript completes. Unfortunately, the site I'm scraping has 3 different loading stages: 1) load the initial HTML, 2) do the first round of JavaScript DOM manipulation, 3) pause for half a second, then do a second round of JS DOM manipulation. For some reason, the second round isn't caught by the wb_DocumentCompleted() handler, but it is always caught when wb.ReadyState == Complete. So why not remove it from wb_DocumentCompleted()? I'm still not sure why it isn't caught there, and that's where the beansoftware article recommended putting it. I'm going to keep looking into it. I just wanted to publish this code so anyone who's interested can use it. Enjoy!
    using System.Threading;
    using System.Windows.Forms;

    public class WebProcessor
    {
        private string GeneratedSource { get; set; }
        private string URL { get; set; }

        public string GetGeneratedHTML(string url)
        {
            URL = url;
            Thread t = new Thread(new ThreadStart(WebBrowserThread));
            t.SetApartmentState(ApartmentState.STA);
            t.Start();
            t.Join();
            return GeneratedSource;
        }

        private void WebBrowserThread()
        {
            WebBrowser wb = new WebBrowser();
            wb.Navigate(URL);
            wb.DocumentCompleted += new WebBrowserDocumentCompletedEventHandler(wb_DocumentCompleted);

            while (wb.ReadyState != WebBrowserReadyState.Complete)
                Application.DoEvents();

            // Added this line, because the final HTML takes a while to show up
            GeneratedSource = wb.Document.Body.InnerHtml;
            wb.Dispose();
        }

        private void wb_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
        {
            WebBrowser wb = (WebBrowser)sender;
            GeneratedSource = wb.Document.Body.InnerHtml;
        }
    }

    Read the article

  • JSF SSL Hazard

    - by java beginner
    In my application it is required that only certain pages need to be secured using SSL so I configured it security-constraint> <display-name>Security Settings</display-name> <web-resource-collection> <web-resource-name>SSL Pages</web-resource-name> <description/> <url-pattern>/*.jsp</url-pattern> <http-method>GET</http-method> <http-method>POST</http-method> </web-resource-collection> <user-data-constraint> <description>CONFIDENTIAL requires SSL</description> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> and added filter http://blogs.sun.com/jluehe/entry/how_to_downshift_from_https but only one hazard is there. I am using it with richFaces. Once it goes to HTTPS its not changing the page—I mean if I perform post action it doesn't actually happen. But if I do it from the local machine's browser it works perfectly, from a remote browser it stucks with HTTPS and not changing after that. Here is my web.xml's snap: <filter> <filter-name>MyFilter</filter-name> <filter-class>MyFilter</filter-class> <init-param> <param-name>httpPort</param-name> <param-value>8080</param-value> </init-param> </filter> <filter-mapping> <filter-name>MyFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <security-constraint> <web-resource-collection> <web-resource-name>Protected resource</web-resource-name> <url-pattern>somePattern</url-pattern> <http-method>GET</http-method> <http-method>POST</http-method> </web-resource-collection> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> and some other filters of richfaces. Problem is strange. If I try to access the web app from local's machine's browser it works fine but in remote machine's browser once it get into HTTP, all the forms of that page aswell as href stops working.(JSF,facelet is used.)

    Read the article

  • There's some HTML content in a file which is downloaded using PHP scripts on Firefox

    - by chqiu
    I use the following to download a file with PHP: ob_start(); $browser = id_browser(); header('Content-Type: '.(($browser=='IE' || $browser=='OPERA')? 'application/octetstream':'application/octet-stream')); header('Expires: '.gmdate('D, d M Y H:i:s').' GMT'); header('Content-Transfer-Encoding: binary'); header('Content-Length: '.filesize(realpath($fullpath))); //header("Content-Encoding: none"); if($browser == 'IE') { header('Content-Disposition: attachment; filename="'.$file.'"'); header('Cache-Control: must-revalidate, post-check=0, pre-check=0'); header('Pragma: public'); } else { header('Content-Disposition: attachment; filename="'.$file.'"'); header('Cache-Control: no-cache, must-revalidate'); header('Pragma: no-cache'); } //@set_time_limit( 0 ); ReadFileChunked(utf8_decode($fullpath)); ob_end_flush(); The source code of ReadFileChunked is: function ReadFileChunked($filename,$retbytes=true) { $chunksize = 1*(1024*1024); $remainFileSize = filesize($filename); if($remainFileSize < $chunksize) $chunksize = $remainFileSize; $buffer = ''; $cnt =0; // $handle = fopen($filename, 'rb'); //echo $filename."<br>"; $handle = fopen($filename, 'rb'); if ($handle === false) { //echo 1; return false; } //echo 2; while (!feof($handle)) { //echo "current remain file size $remainFileSize<br>"; //echo "current chunksize $chunksize<br>"; $buffer = fread($handle, $chunksize); echo $buffer; sleep(1); ob_flush(); flush(); if ($retbytes) { $cnt += strlen($buffer); } $remainFileSize -= $chunksize; if($remainFileSize == 0) break; if($remainFileSize < $chunksize) { $chunksize = $remainFileSize; } } $status = fclose($handle); if ($retbytes && $status) { return $cnt; // return num. bytes delivered like readfile() does. } return $status; } The question is : The file downloaded will contiain some html tags which are the content of the html code generated by the php. The error will happened when downloading the txt file with the file size smaller than 4096 bytes. Please help me to slove this problem , thank you very much! Chu

    Read the article

  • urllib2 misbehaving with dynamically loaded content

    - by Sheena
    Some Code headers = {} headers['user-agent'] = 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0' headers['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' headers['Accept-Language'] = 'en-gb,en;q=0.5' #headers['Accept-Encoding'] = 'gzip, deflate' request = urllib.request.Request(sURL, headers = headers) try: response = urllib.request.urlopen(request) except error.HTTPError as e: print('The server couldn\'t fulfill the request.') print('Error code: {0}'.format(e.code)) except error.URLError as e: print('We failed to reach a server.') print('Reason: {0}'.format(e.reason)) else: f = open('output/{0}.html'.format(sFileName),'w') f.write(response.read().decode('utf-8')) A url http://groupon.cl/descuentos/santiago-centro The situation Here's what I did: enable javascript in browser open url above and keep an eye on the console disable javascript repeat step 2 use urllib2 to grab the webpage and save it to a file enable javascript open the file with browser and observe console repeat 7 with javascript off results In step 2 I saw that a whole lot of the page content was loaded dynamically using ajax. So the HTML that arrived was a sort of skeleton and ajax was used to fill in the gaps. This is fine and not at all surprising Since the page should be seo friendly it should work fine without js. in step 4 nothing happens in the console and the skeleton page loads pre-populated rendering the ajax unnecessary. This is also completely not confusing in step 7 the ajax calls are made but fail. this is also ok since the urls they are using are not local, the calls are thus broken. The page looks like the skeleton. This is also great and expected. in step 8: no ajax calls are made and the skeleton is just a skeleton. I would have thought that this should behave very much like in step 4 question What I want to do is use urllib2 to grab the html from step 4 but I cant figure out how. What am I missing and how could I pull this off? To paraphrase If I was writing a spider I would want to be able to grab plain ol' HTML (as in that which resulted in step 4). I dont want to execute ajax stuff or any javascript at all. I don't want to populate anything dynamically. I just want HTML. The seo friendly site wants me to get what I want because that's what seo is all about. How would one go about getting plain HTML content given the situation I outlined? To do it manually I would turn off js, navigate to the page and copy the html. I want to automate this. stuff I've tried I used wireshark to look at packet headers and the GETs sent off from my pc in steps 2 and 4 have the same headers. Reading about SEO stuff makes me think that this is pretty normal otherwise techniques such as hijax wouldn't be used. Here are the headers my browser sends: Host: groupon.cl User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-gb,en;q=0.5 Accept-Encoding: gzip, deflate Connection: keep-alive Here are the headers my script sends: Accept-Encoding: identity Host: groupon.cl Accept-Language: en-gb,en;q=0.5 Connection: close Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 User-Agent: User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0 The differences are: my script has Connection = close instead of keep-alive. 
I can't see how this would cause a problem my script has Accept-encoding = identity. This might be the cause of the problem. I can't really see why the host would use this field to determine the user-agent though. If I change encoding to match the browser request headers then I have trouble decoding it. I'm working on this now... watch this space, I'll update the question as new info comes up
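
    On the Accept-Encoding hunch at the end: if you send gzip like the browser does, you have to decompress the body yourself before decoding it. The sketch below is just that experiment, assuming Python 3's urllib (which the code above already appears to use) and the same browser-like headers; it will not change which HTML the server chooses to send, but it rules the encoding question in or out.

    ```python
    # Experiment sketch: request the page with browser-like headers including
    # gzip, then decompress manually. Header values are copied from the post.
    import gzip
    import io
    import urllib.request

    url = "http://groupon.cl/descuentos/santiago-centro"
    headers = {
        "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) "
                      "Gecko/20100101 Firefox/16.0",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Language": "en-gb,en;q=0.5",
        "Accept-Encoding": "gzip",
    }

    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
        if resp.headers.get("Content-Encoding") == "gzip":
            body = gzip.GzipFile(fileobj=io.BytesIO(body)).read()

    print(body.decode("utf-8", errors="replace")[:500])
    ```

    If the decompressed HTML is still the bare skeleton, the headers are not the differentiator, and comparing cookies between the browser session and the script would be a reasonable next experiment.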

    Read the article

  • jQuery drag and drop script and a problem reading a JSON array

    - by Mac Taylor
    i made a script , exactly like wordpress widgets page and u can drag and drop objects this is my jquery script : <script type="text/javascript" >$(function(){ $('.widget') .each(function(){ $(this).hover(function(){ $(this).find('h4').addClass('collapse'); }, function(){ $(this).find('h4').removeClass('collapse'); }) .find('h4').hover(function(){ $(this).find('.in-widget-title').css('visibility', 'visible'); }, function(){ $(this).find('.in-widget-title').css('visibility', 'hidden'); }) .click(function(){ $(this).siblings('.widget-inside').toggle(); //Save state on change of collapse state of panel updateWidgetData(); }) .end() .find('.in-widget-title').css('visibility', 'hidden'); }); $('.column').sortable({ connectWith: '.column', handle: 'h4', cursor: 'move', placeholder: 'placeholder', forcePlaceholderSize: true, opacity: 0.4, start: function(event, ui){ //Firefox, Safari/Chrome fire click event after drag is complete, fix for that if($.browser.mozilla || $.browser.safari) $(ui.item).find('.widget-inside').toggle(); }, stop: function(event, ui){ ui.item.css({'top':'0','left':'0'}); //Opera fix if(!$.browser.mozilla && !$.browser.safari) updateWidgetData(); } }) .disableSelection(); }); function updateWidgetData(){ var items=[]; $('.column').each(function(){ var columnId=$(this).attr('id'); $('.widget', this).each(function(i){ var collapsed=0; if($(this).find('.widget-inside').css('display')=="none") collapsed=1; //Create Item object for current panel var item={ id: $(this).attr('id'), collapsed: collapsed, order : i, column: columnId }; //Push item object into items array items.push(item); }); }); //Assign items array to sortorder JSON variable var sortorder={ items: items }; //Pass sortorder variable to server using ajax to save state $.post("blocks.php"+"&order="+$.toJSON(sortorder), function(data){ $('#console').html(data).fadeIn("slow"); }); } </script> main part is saving object orders in table and this is my php part : function stripslashes_deep($value) { $value = is_array($value) ? 
array_map('stripslashes_deep', $value) : stripslashes($value); return $value; } $order = $_GET['order']; $order = sql_quote($order); if(empty($order)){ echo "Invalid data"; exit; } global $db,$prefix; if (get_magic_quotes_gpc()) { $_POST = array_map('stripslashes_deep', $_POST); $_GET = array_map('stripslashes_deep', $_GET); $_COOKIE = array_map('stripslashes_deep', $_COOKIE); $_REQUEST = array_map('stripslashes_deep', $_REQUEST); } $data=json_decode($order); foreach($newdata->items as $item) { //Extract column number for panel $col_id=preg_replace('/[^\d\s]/', '', $item->column); //Extract id of the panel $widget_id=preg_replace('/[^\d\s]/', '', $item->id); $sql="UPDATE blocks_tbl SET bposition='$col_id', weight='".$item->order."' WHERE id='".$widget_id."'"; mysql_query($sql) or die('Error updating widget DB'); } print_r($order); now forexample the output is this : items\":[{\"id\":\"item26\",\"collapsed\":1,\"order\":0,\"column\":\"c\"},{\"id\":\"item0\",\"collapsed\":1,\"order\":0,\"column\":\"i\"},{\"id\":\"item0\",\"collapsed\":1,\"order\":1,\"column\":\"i\"},{\"id\":\"item1\",\"collapsed\":1,\"order\":2,\"column\":\"i\"},{\"id\":\"item3\",\"collapsed\":1,\"order\":3,\"column\":\"i\"},{\"id\":\"item16\",\"collapsed\":1,\"order\":4,\"column\":\"i\"},{\"id\":\"item0\",\"collapsed\":1,\"order\":5,\"column\":\"i\"},{\"id\":\"item6\",\"collapsed\":1,\"order\":6,\"column\":\"i\"},{\"id\":\"item17\",\"collapsed\":1,\"order\":7,\"column\":\"i\"},{\"id\":\"item19\",\"collapsed\":1,\"order\":8,\"column\":\"i\"},{\"id\":\"item10\",\"collapsed\":1,\"order\":9,\"column\":\"i\"},{\"id\":\"item11\",\"collapsed\":1,\"order\":10,\"column\":\"i\"},{\"id\":\"item0\",\"collapsed\":1,\"order\":0,\"column\":\"l\"},{\"id\":\"item5\",\"collapsed\":1,\"order\":1,\"column\":\"l\"},{\"id\":\"item8\",\"collapsed\":1,\"order\":2,\"column\":\"l\"},{\"id\":\"item13\",\"collapsed\":1,\"order\":3,\"column\":\"l\"},{\"id\":\"item21\",\"collapsed\":1,\"order\":4,\"column\":\"l\"},{\"id\":\"item28\",\"collapsed\":1,\"order\":5,\"column\":\"l\"},{\"id\":\"item7\",\"collapsed\":1,\"order\":0,\"column\":\"r\"},{\"id\":\"item20\",\"collapsed\":1,\"order\":1,\"column\":\"r\"},{\"id\":\"item15\",\"collapsed\":1,\"order\":2,\"column\":\"r\"},{\"id\":\"item18\",\"collapsed\":1,\"order\":3,\"column\":\"r\"},{\"id\":\"item14\",\"collapsed\":1,\"order\":4,\"column\":\"r\"}]} question is how can i find out column_id or order im a little bit confused
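
    On the final question (finding the column and order for each widget): once the escaping is stripped, the posted JSON is just an items array whose elements carry id, collapsed, order and column. The snippet below is a language-neutral illustration in Python of reading those fields; the payload is a shortened version of the JSON shown above, and nothing here is the site's actual code.

    ```python
    # Illustration only: pull the column and position out of the posted JSON.
    # Field names come from the question; the payload is shortened.
    import json

    payload = '''{"items": [
        {"id": "item26", "collapsed": 1, "order": 0, "column": "c"},
        {"id": "item0",  "collapsed": 1, "order": 0, "column": "i"},
        {"id": "item5",  "collapsed": 1, "order": 1, "column": "l"}
    ]}'''

    data = json.loads(payload)
    for item in data["items"]:
        widget_id = item["id"].replace("item", "")   # numeric part of the id
        print("widget", widget_id, "-> column", item["column"],
              "position", item["order"])
    ```

    Note that in the posted data the column values are letters ("c", "i", "l", "r"), so a digits-only filter on the column field will come back empty; the order value is already the numeric position within that column.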

    Read the article

  • .NET Reflector 6, .NET Reflector Pro, TestDriven.NET, .NET 4.0 and Mono

    - by Bart Read
    By now you may well have noticed that .NET Reflector 6 and .NET Reflector Pro are out in the wild. The official launch happened today, although we actually put the software out last Thursday as part of a phased release plan to ensure that everything went smoothly today which, so far, it seems to have done. Clive and Alex have already talked extensively about what the new version and the Pro extension do, so I'm not going to go into any detail here, but I've linked to their blogs at the bottom. What...(read more)

    Read the article

  • VB.NET IF() Coalesce and “Expression Expected” Error

    - by Jeff Widmer
    I am trying to use the equivalent of the C# “??” operator in some VB.NET code that I am working in. This StackOverflow article for “Is there a VB.NET equivalent for C#'s ?? operator?” explains the VB.NET IF() statement syntax which is exactly what I am looking for... and I thought I was going to be done pretty quickly and could move on. But after implementing the IF() statement in my code I started to receive this error: Compiler Error Message: BC30201: Expression expected. And no matter how I tried using the “IF()” statement, whenever I tried to visit the aspx page that I was working on I received the same error. This other StackOverflow article Using VB.NET If vs. IIf in binding/rendering expression indicated that the VB.NET IF() operator was not available until VS2008 or .NET Framework 3.5.  So I checked the Web Application project properties but it was targeting the .NET Framework 3.5: So I was still not understanding what was going on, but then I noticed the version information in the detailed compiler output of the error page: This happened to be a C# project, but with an ASPX page with inline VB.NET code (yes, it is strange to have that but that is the project I am working on).  So even though the project file was targeting the .NET Framework 3.5, the ASPX page was being compiled using the .NET Framework 2.0.  But why?  Where does this get set?  How does ASP.NET know which version of the compiler to use for the inline code? For this I turned to the web.config.  Here is the system.codedom/compilers section that was in the web.config for this project: <system.codedom>     <compilers>         <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">             <providerOption name="CompilerVersion" value="v3.5" />             <providerOption name="WarnAsError" value="false" />         </compiler>     </compilers> </system.codedom> Keep in mind that this is a C# web application project file but my aspx file has inline VB.NET code.  The web.config does not have any information for how to compile for VB.NET so it defaults to .NET 2.0 (instead of 3.5 which is what I need). So the web.config needed to include the VB.NET compiler option.  Here it is with both the C# and VB.NET options (I copied the VB.NET config from a new VB.NET Web Application project file).     <system.codedom>         <compilers>             <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">                 <providerOption name="CompilerVersion" value="v3.5" />                 <providerOption name="WarnAsError" value="false" />             </compiler>       <compiler language="vb;vbs;visualbasic;vbscript" extension=".vb" warningLevel="4" type="Microsoft.VisualBasic.VBCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">         <providerOption name="CompilerVersion" value="v3.5"/>         <providerOption name="OptionInfer" value="true"/>         <providerOption name="WarnAsError" value="false"/>       </compiler>     </compilers>     </system.codedom>   So the inline VB.NET code on my aspx page was being compiled using the .NET Framework 2.0 when it really needed to be compiled with the .NET Framework 3.5 compiler in order to take advantage of the VB.NET IF() coalesce statement.  
Without the VB.NET web.config compiler option, the default is to compile using the .NET Framework 2.0 and the VB.NET IF() coalesce statement does not exist (at least in the form that I want it in).  FYI, there is an older IF statement in VB.NET 2.0 compiler which is why it is giving me the unusual “Expression Expected” error message – see this article for when VB.NET got the new updated version. EDIT (2011-06-20): I had made a wrong assumption in the first version of this blog post.  After a little more research and investigation I was able to figure out that the issue was in the web.config and not with the IIS App Pool.  Thanks to the comment from James which forced me to look into this again.

    Read the article

  • CodePlex Daily Summary for Monday, February 22, 2010

    CodePlex Daily Summary for Monday, February 22, 2010New ProjectsAVDB: System to keep track of orders and the inventory of televisions, DVDs, VCRs etcBooky: Booky is an online Bookmark Management Tool. Gear Up for Lord of the Rings Online (lotro): Windows utility for checking what your LOTRO character currently has equipped and figuring out gear you should get to improve your stats.GotSharp Extensions: GotSharp Extensions is a set of helpful classes and extension methods that can make your coding experience easier and cleaner. Halfwit: A minimalist WPF Twitter client.HOA Starter Kit: A community subdivision website starter kit. First draft.Lua For Irony: Project to define the Lua language using the Irony (http://irony.codeplex.com/) development kit. This work is based heavily on the work done for V...MimeCloud: Scalable .NET Digital Asset & Media Management: MimeCloud is a scalable digital asset library & media management toolset. Founded by Alex Norcliffe and Peter Miller Written by people who have b...Parallel Mandelbrot Set solver: Solving the Mandelbrot set using the Parallel class in .NET 4.0. Showing the resulting image in a WPF application. The solution file requires VS 2010.Pomogad - Pomodoro Windows Gadget: Você usa Pomodoro Technique? Não sabe o que é? Veja aqui http://www.pomodorotechnique.com Agora que você já sabe, que tal usar essa técnica? E p...PostCrap - flyweight .NET AOP post compiler: PostCrap is a flyweight attribute based aspect injection .NET post compiler It is written in C# and uses Mono.Cecil to modify assemblies and injec...Software + Service Reference Demo Kit: MS China Developer and Platform Evangelism team created an End-2-End demo for Software + Service. Yet Another SharePoint Tool: YEAST provides you with a simple to integrate approach to generating SharePoint solution packages as part of a Visual Studio project. Zen Coding Visual Studio Plugin: Zen Coding for Visual Studio is plugin for HTML and CSS hi-speed codingNew Releases.Net MSBuild Google Closure Compiler Task: .Net MSBuild Google Closure Compiler Task 1.1: - Corrected issue with regular expression source file and renamingdotNails: dotNails_0.5.9: NOTE - the latest source code has been moved to google code to take advantage of Mercurial source control - http://code.google.com/p/dotnails/sourc...EasyWFUnit: EasyWFUnit-2.2: Release 2.2 of EasyWFUnit, an extension library to support unit testing of Windows Workflow, includes a revised WinForm GUI Test Builder that utili...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite BETA2 (for .NET 4.0RC): Includes Fluent.dll (with .pdb and .xml) and test application compiled with .NET 4.0 RC.FolderSize: FolderSize.Win32.1.0.3.0: FolderSize.Win32.1.0.3.0 A simple utility intended to be used to scan harddrives for the folders that take most place and display this to the user...Fusion Charts Free for SharePoint: 1.3: Fix release for issue #11833 : Feature Must Be Activated on Root of Web Application.GotSharp Extensions: 1.0: First release, containing only a few extension methods for the System.String and System.IO.Stream classes, and a Range utility class.Jeremy's Experimental Repository: FluentValidation with IoC Sample: Sample code for the blog post Using FluentValidation with an IoC containerMiniTwitter: 1.08: MiniTwitter 1.08 更新内容 修正 自動更新が CodePlex の変更で動いていなかった問題を修正 自動更新に失敗すると落ちるバグを修正 通知領域アイコン右クリックで表示されるメニューが消えないバグを修正 変更 ハッシュタグの抽出条件を変更 API のエンドポイ...MSTS Editors & Tools: Simis Editor v0.3: Simis Editor v0.3 Enabled Edit > Undo and Edit > Redo. 
Undoing/redoing back to last saved state is identified as saved (no prompt on exit, etc.)....Parallel Mandelbrot Set solver: Alpha 1: First releaseParallelTasks: ParallelTasks 2.0 beta1: ParallelTasks 2.0 is a total re-write of the original version. Featuring improved performance and stability and a more consistent API.Personal Expense Tracker: Personal Expense Tracker v0.1 beta: This is the first beta release. Please provide me with your feedback.PostCrap - flyweight .NET AOP post compiler: PostCrap 1.0 AOP source and binaries: PostCrap 1.0 source and binaries (the unit test project contains sample interceptor attributes for exception handling & logging)Protoforma | Tactica Adversa: Skilful 0.1.3.276: AlphaRawr: Rawr 2.3.10: - More improvements to the default filters - Further improvement on avoiding useless gem swaps from the Optimizer. - Normal/Heroic ICC items shou...Reusable Library: v1.0.2: A collection of reusable abstractions for enterprise application developer.Sem.Sync: 2010-02-21 - Synchronization Manager - Beta: This release is not tested very well, so you should use this version only to evaluate new features. - Changed way of handling source-ids in order ...Survey - web survey & form engine: Survey 1.1.0: Release Survey v. 1.1.0.0 Major changes: - layout & graphics completely overhauled - several technical changes & repairs (e.g. matrix question iss...Yet Another SharePoint Tool: Version 1: Version 1Zeta Resource Editor: Release 2010-02-21: New source code release.Most Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)Image Resizer Powertoy Clone for WindowsASP.NETDotNetNuke® Community EditionMicrosoft SQL Server Community & SamplesMost Active ProjectsDinnerNow.netRawrBlogEngine.NETNB_Store - Free DotNetNuke Ecommerce Catalog ModuleSharpyjQuery Library for SharePoint Web ServicesSharePoint ContribInfoServicepatterns & practices – Enterprise LibraryPHPExcel

    Read the article

  • Parallelism in .NET – Part 6, Declarative Data Parallelism

    - by Reed
    When working with a problem that can be decomposed by data, we have a collection, and some operation being performed upon the collection. I’ve demonstrated how this can be parallelized using the Task Parallel Library and imperative programming using imperative data parallelism via the Parallel class. While this provides a huge step forward in terms of power and capabilities, in many cases, special care must still be given for relatively common scenarios. C# 3.0 and Visual Basic 9.0 introduced a new, declarative programming model to .NET via the LINQ Project. When working with collections, we can now write software that describes what we want to occur without having to explicitly state how the program should accomplish the task. By taking advantage of LINQ, many operations become much shorter, more elegant, and easier to understand and maintain. Version 4.0 of the .NET framework extends this concept into the parallel computation space by introducing Parallel LINQ. Before we delve into PLINQ, let’s begin with a short discussion of LINQ. LINQ, the extensions to the .NET Framework which implement language integrated query, set, and transform operations, is implemented in many flavors. For our purposes, we are interested in LINQ to Objects. When dealing with parallelizing a routine, we typically are dealing with in-memory data storage. More data-access oriented LINQ variants, such as LINQ to SQL and LINQ to Entities in the Entity Framework, fall outside of our concern, since the parallelism there is the concern of the database engine processing the query itself. LINQ (LINQ to Objects in particular) works by implementing a series of extension methods, most of which work on IEnumerable<T>. The language enhancements use these extension methods to create a very concise, readable alternative to using the traditional foreach statement. For example, let’s revisit our minimum aggregation routine we wrote in Part 4:

        double min = double.MaxValue;
        foreach(var item in collection)
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        }

    Here, we’re doing a very simple computation, but writing this in an imperative style. This can be loosely translated to English as: Create a very large number, and save it in min. Loop through each item in the collection. For every item: Perform some computation, and save the result. If the computation is less than min, set min to the computation. Although this is fairly easy to follow, it’s quite a few lines of code, and it requires us to read through the code, step by step, line by line, in order to understand the intention of the developer. We can rework this same statement, using LINQ:

        double min = collection.Min(item => item.PerformComputation());

    Here, we’re after the same information. However, this is written using a declarative programming style.
When we see this code, we’d naturally translate this to English as: Save the Min value of collection, determined via calling item.PerformComputation() That’s it – instead of multiple logical steps, we have one single, declarative request.  This makes the developer’s intentions very clear, and very easy to follow.  The system is free to implement this using whatever method required. Parallel LINQ (PLINQ) extends LINQ to Objects to support parallel operations.  This is a perfect fit in many cases when you have a problem that can be decomposed by data.  To show this, let’s again refer to our minimum aggregation routine from Part 4, but this time, let’s review our final, parallelized version: // Safe, and fast! double min = double.MaxValue; // Make a "lock" object object syncObject = new object(); Parallel.ForEach( collection, // First, we provide a local state initialization delegate. () => double.MaxValue, // Next, we supply the body, which takes the original item, loop state, // and local state, and returns a new local state (item, loopState, localState) => { double value = item.PerformComputation(); return System.Math.Min(localState, value); }, // Finally, we provide an Action<TLocal>, to "merge" results together localState => { // This requires locking, but it's only once per used thread lock(syncObj) min = System.Math.Min(min, localState); } ); Here, we’re doing the same computation as above, but fully parallelized.  Describing this in English becomes quite a feat: Create a very large number, and save it in min Create a temporary object we can use for locking Call Parallel.ForEach, specifying three delegates For the first delegate: Initialize a local variable to hold the local state to a very large number For the second delegate: For each item in the collection, perform some computation, save the result If the result is less than our local state, save the result in local state For the final delegate: Take a lock on our temporary object to protect our min variable Save the min of our min and local state variables Although this solves our problem, and does it in a very efficient way, we’ve created a set of code that is quite a bit more difficult to understand and maintain. PLINQ provides us with a very nice alternative.  In order to use PLINQ, we need to learn one new extension method that works on IEnumerable<T> – ParallelEnumerable.AsParallel(). That’s all we need to learn in order to use PLINQ: one single method.  We can write our minimum aggregation in PLINQ very simply: double min = collection.AsParallel().Min(item => item.PerformComputation()); By simply adding “.AsParallel()” to our LINQ to Objects query, we converted this to using PLINQ and running this computation in parallel!  This can be loosely translated into English easily, as well: Process the collection in parallel Get the Minimum value, determined by calling PerformComputation on each item Here, our intention is very clear and easy to understand.  We just want to perform the same operation we did in serial, but run it “as parallel”.  PLINQ completely extends LINQ to Objects: the entire functionality of LINQ to Objects is available.  By simply adding a call to AsParallel(), we can specify that a collection should be processed in parallel.  This is simple, safe, and incredibly useful.

    Read the article

  • 17 new features in Visual Studio 2010

    - by vik20000in
    Visual Studio 2010 was released to RTM a few days back. This release comes with a large number of improvements on many fronts. In this post I will try to point out some of the major ones.
    1) Visual Studio IDE improvements – The Visual Studio IDE has been rewritten in WPF. The look and feel of the IDE has been improved for better readability, and the Start Page has been redesigned and templated so that anyone can change it as they wish.
    2) Multiple monitors – Support for multiple monitors already existed in Visual Studio, but in this edition it has been improved so much that we can now place document, design and code windows outside the IDE on another monitor.
    3) Zoom in the code editor – Building the editors in WPF has brought significant improvements. The best one that I like is the zoom feature: we can now zoom the code editor with Ctrl + mouse scroll. The zoom feature does not work on the design surface or on icon-based windows such as Solution Explorer and the toolbox.
    4) Box selection – Another important improvement in Visual Studio 2010 is box selection. We can select a rectangular block by holding down the Alt key and selecting with the mouse. In the rectangular selection we can insert text, paste the same code on different lines, and so on. This is helpful if, for example, you want to convert a number of variables from public to private.
    5) New improved search – One of the best productivity improvements in Visual Studio 2010 is its new search-as-you-type support. It lives in the Navigate To window, which can be brought up by pressing Ctrl + Comma. The Navigate To window also understands camel casing and can search on the upper-case characters alone; for example, we can search for AOH to find AddOrderHeader.
    6) Call Hierarchy – This feature is only available in the Visual C# and Visual C++ editors. The Call Hierarchy window displays the calls made to and from (yes, both to and from) a selected method, property or constructor. It also shows implementations of an interface and overrides of virtual or abstract methods. This window is very helpful for understanding code flow and evaluating the effect of making changes, and best of all it is available at design time, not only at run time like the debugger.
    7) Highlighting references – One very cool feature in Visual Studio 2010 is that when you select a symbol, every use of that symbol is highlighted alongside. This works for all the symbols returned by Find All References, including class names, object variables, properties and methods. We can also use Ctrl + Shift + Down Arrow or Up Arrow to move through them.
    8) Generate from usage – The Generate From Usage feature lets you use classes and members before you define them. You can generate a stub for any undefined class, constructor, method, property, field or enum that you want to use but have not yet defined. You can generate new types and members without leaving your current location in code, which minimizes interruption to your workflow (see the sketch after this list).
    9) IntelliSense suggestion mode – IntelliSense now provides two alternatives for statement completion: completion mode and suggestion mode. Use suggestion mode in situations where classes and members are used before they are defined. In suggestion mode, when you type in the editor and then commit the entry, the text you typed is inserted into the code; when you commit an entry in completion mode, the editor inserts the entry that is highlighted in the members list. When an IntelliSense window is open, you can press Ctrl+Alt+Spacebar to toggle between completion mode and suggestion mode.
    10) Application Lifecycle Management – A client application for managing the application lifecycle (version control, work item tracking, build automation, team portal and so on) is available for free (this is not available for the Express editions).
    11) Start Page – The Start Page has been redesigned in WPF with new functionality and a new look. Tabbed areas provide content from different sources, including MSDN. Once you open a project the Start Page closes automatically, the list of recent projects lets you remove projects from it, and above all the page is customizable enough to be changed to individual requirements.
    12) Extension Manager – Visual Studio 2010 provides good extensibility points, and we can also use MEF to extend most features of Visual Studio. The new Extension Manager can browse the Visual Studio Gallery and install extensions without even leaving the IDE.
    13) Code snippets – Code snippets are now available for HTML, JScript and ASP.NET as well.
    14) Improved IntelliSense for JavaScript – IntelliSense for JavaScript has been improved vastly (by around 2–5 times), and it now also shows XML documentation comments as you type.
    15) Web deployment – Web deployment has been vastly improved; we can package and publish a web application in one click. The three major supported deployment scenarios are web packages, one-click deployment and web configuration transformations.
    16) SharePoint – Visual Studio 2010 also brings a vastly improved development experience for SharePoint. We can create, edit, debug, package, deploy and activate SharePoint projects from within Visual Studio, and deploying a site is as easy as hitting F5.
    17) Azure – Visual Studio 2010 also comes with handy improvements for developing for the Windows Azure environment.
    Vikram
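    To make the Generate From Usage item above more concrete, here is a minimal C# sketch. The Order class, its Total property and its Submit method are hypothetical names invented for this illustration, and the stub at the end is only roughly what Visual Studio 2010 would generate for you.

    // Hypothetical calling code written before the Order type exists.
    // In Visual Studio 2010 the smart tag offers "Generate class",
    // "Generate constructor stub", "Generate property stub" and
    // "Generate method stub" at this point, so the type below can be
    // created without leaving this method.
    public class CheckoutDemo
    {
        public void PlaceOrder()
        {
            var order = new Order("ORD-001");   // Order does not exist yet
            order.Total = 99.95m;               // neither does Total
            order.Submit();                     // nor Submit()
        }
    }

    // Roughly the kind of stub Visual Studio generates (sketch only):
    public class Order
    {
        public Order(string orderNumber)
        {
            // Generated stubs typically throw until you fill them in.
            throw new System.NotImplementedException();
        }

        public decimal Total { get; set; }

        public void Submit()
        {
            throw new System.NotImplementedException();
        }
    }

    The point is that the calling code drives the shape of the generated type, so you can stay in the flow of the method you are writing.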

    Read the article

  • Visual Studio 2010 Zooming – Keyboard Commands, Global Zoom

    - by Jon Galloway
    One of my favorite features in Visual Studio 2010 is zoom. It first caught my attention as a useful tool for screencasts and presentations, but after getting used to it I’m finding that it’s really useful when I’m developing – letting me zoom out to see the big picture, then zoom in to concentrate on a few lines of code.
    Zooming without the scroll wheel
    The common way you’ll see this feature demonstrated is with the mouse wheel – you hold down the Control key and scroll up or down to change the font size. However, I’m often using this on my laptop, which doesn’t have a mouse wheel. It turns out that there are other ways to control zooming in Visual Studio 2010.
    Keyboard commands
    You can use Control+Shift+Comma to zoom out and Control+Shift+Period to zoom in. I find it’s easier to remember these by the greater-than / less-than signs, so it’s really Control+> to zoom in and Control+< to zoom out. Like most Visual Studio commands, you can change those key bindings: in the Tools menu, select Options / Keyboard, then either scroll down the list to the three View.Zoom commands or filter by typing View.Zoom into the “Show commands containing” textbox.
    The Scroll Dropdown
    If you forget the keyboard commands and you don’t have a scroll wheel, there’s a zoom menu in the text editor. I’m mostly pointing it out because I’ve been using Visual Studio 2010 for months and never noticed it until this week. It’s down in the lower left corner.
    Keeping Zoom In Sync Across All Tabs
    The zoom setting is per tab, which is a problem if you’re cranking up your font sizes for a presentation. Fortunately there’s a great new Visual Studio extension called Presentation Zoom. It’s a nice, simple extension that does just one thing – it updates all your editor windows to keep the zoom setting in sync. It’s written by Chris Granger, a Visual Studio Program Manager, in case you’re worried about installing random extensions. A rough sketch of how this kind of extension might work follows at the end of this post.
    See it in action
    Of course, if you’ve got Visual Studio 2010 installed, you’ve hopefully already been zooming like mad as you read this. If not, you can watch a two minute video from the Visual Studio team showing it off.
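    As a rough illustration of how an extension like Presentation Zoom might keep zoom levels in sync, here is a minimal C# sketch built on the Visual Studio 2010 editor extensibility (MEF) model. This is an assumption about the general approach, not Presentation Zoom’s actual source; the ZoomSyncListener class name and the simple static list of open views are invented for the example.

    using System.Collections.Generic;
    using System.ComponentModel.Composition;
    using Microsoft.VisualStudio.Text.Editor;
    using Microsoft.VisualStudio.Utilities;

    // Sketch only: listens for new editor views and copies the zoom level
    // of whichever view the user changes onto every other open view.
    [Export(typeof(IWpfTextViewCreationListener))]
    [ContentType("text")]
    [TextViewRole(PredefinedTextViewRoles.Document)]
    internal sealed class ZoomSyncListener : IWpfTextViewCreationListener
    {
        private static readonly List<IWpfTextView> OpenViews = new List<IWpfTextView>();
        private static double _sharedZoom = 100.0; // percent
        private static bool _syncing;

        public void TextViewCreated(IWpfTextView textView)
        {
            OpenViews.Add(textView);
            textView.Closed += (sender, args) => OpenViews.Remove(textView);

            // Bring the newly opened editor window up to the shared zoom level.
            textView.Options.SetOptionValue(DefaultWpfViewOptions.ZoomLevelId, _sharedZoom);

            // When this view's zoom changes, push the new level to the other views.
            textView.ZoomLevelChanged += (sender, args) =>
            {
                if (_syncing) return;
                _syncing = true;
                try
                {
                    _sharedZoom = args.NewZoomLevel;
                    foreach (var view in OpenViews)
                    {
                        if (view != textView && !view.IsClosed)
                        {
                            view.Options.SetOptionValue(DefaultWpfViewOptions.ZoomLevelId, _sharedZoom);
                        }
                    }
                }
                finally
                {
                    _syncing = false;
                }
            };
        }
    }

    The core idea is simply a shared zoom value pushed to every open editor view; a real extension would also need to decide which view roles and content types to include.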

    Read the article

  • More information on the Patch Tuesday updates for SQL Server

    - by AaronBertrand
    Last week, Microsoft released a series of patches for all supported versions of SQL Server (from SQL Server 2005 SP3 all the way to SQL Server 2008 R2). The reason for the patch against SQL Server installations is largely a client-side issue with the XML viewer application, and for SQL Server specifically, the exploit is limited to potential information disclosure. A very easy way to avoid exposure to this exploit is simply to never open a file with the .disco extension (these files are likely already...(read more)

    Read the article
