Search Results

Search found 21908 results on 877 pages for 'content catalog'.


  • MVC: Nasty __o not declared

    - by xamlnotes
    I ran into this little error with MVC where a bunch of errors showed up about __o not being declared. This was driving me nuts. Then I ran across this link that solved it: http://stackoverflow.com/questions/750902/how-do-i-get-rid-of-o-is-not-declared So, the solution is to put this at the top of the page, like VS does for your site.master: <%-- The following line works around an ASP.NET compiler warning --%> <%: ""%> But what about other pages? Let's say you have a view that's using your site master and that view is throwing this error. Just add the same lines into the content section where the error occurs, like so: <asp:Content ID="indexContent" ContentPlaceHolderID="MainContent" runat="server"> <%-- The following line works around an ASP.NET compiler warning --%> <%: ""%> Then add the rest of your code. That seems to fix it, and it's pretty simple too.

    Read the article

  • Delete pen drive contents and also the Trash in Mac OS X?

    - by Warrior
    I am using a MacBook Pro. I copied some data from my pen drive to my Mac and deleted the content by moving it to the Trash. After that, when I check the pen drive's info, it reports more used space than it should. Only after I empty the Trash do I see the correct free space on the pen drive and can copy data to it again. Is Mac OS X designed to work this way, or is there some other way to delete files besides the "Move to Trash" option? Thanks.
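
    This is indeed how the Finder works: "Move to Trash" on an external drive only moves files into a hidden .Trashes folder on the drive itself, so the space is not freed until the Trash is emptied. If you prefer to bypass the Trash, a minimal Terminal sketch (the volume name PENDRIVE is a placeholder for your drive's actual name):

        # Delete a file immediately, without going through the Trash:
        rm /Volumes/PENDRIVE/unwanted-file.dat

        # Or reclaim space from items already "deleted" to the Trash, which
        # OS X keeps in a hidden .Trashes folder on the drive itself:
        rm -rf /Volumes/PENDRIVE/.Trashes/*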

    Read the article

  • Will spreading your server's load not just consume more resources?

    - by Saif Bechan
    I am running a website with heavy real-time updates. The amount of resources needed per user is quite high; I'll give you an example. Setup: Every visit: the application is PHP/MySQL, so on every visit static and dynamic content is loaded. Resources: Apache, PHP, MySQL. Every second (anything longer than a second is just too long): the website needs to be updated in real time, so every second there is an AJAX call that updates the page. Resources: jQuery, Apache, PHP, MySQL. Average usage for a single user (spending one minute and visiting 3 pages): Apache: +/- 63 requests/responses serving static and dynamic content (img, css, js, html); PHP: +/- 63 requests/responses; MySQL: +/- 63 requests/responses; jQuery: +/- 60 requests/responses. Optimization: I want to optimize this process, but I think that maybe it would end up just the same. Before implementing and testing (which will take weeks) I wanted to get some second opinions from you. Every visit: I want to start by putting nginx in front as a proxy to deliver the static content. Resources: dynamic: Apache, PHP, MySQL; static: nginx. This will take a lot of the load off Apache. Every second: for the script that runs every second I want to set up Node.js (server-side JavaScript) with nginx in front. I want to set it up so that jQuery makes a request once a minute, and Node.js streams the data to the client every second. Resources: jQuery, nginx, Node.js, MySQL. Average usage for a single user (spending one minute and visiting 3 pages): nginx: 4 requests/responses serving mostly static content (img, css, js); Apache: 3 requests, only the pages; PHP: 3 requests, only the pages; Node.js: 1 request / 60 responses; jQuery: 1 request / 60 responses; MySQL: 63 requests/responses. As you can see, in the optimized setup the load from Apache and PHP is lifted and placed on nginx and Node.js, which are known for their light footprint and good performance. But I have my doubts, because there are still two extra programs loaded in memory consuming CPU. So is it better to have fewer programs doing the job, or more? Before I spend a lot of time setting this up I would like to know whether it will be worthwhile.
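
    To make the Node.js half concrete, here is a minimal sketch of the "one request per minute, sixty responses" idea using plain Node's http module; the port and payload are hypothetical, and in the real setup each chunk would come from MySQL rather than a counter:

        var http = require('http');

        http.createServer(function (req, res) {
          res.writeHead(200, { 'Content-Type': 'text/plain' });
          var ticks = 0;
          var timer = setInterval(function () {
            res.write('update ' + (++ticks) + '\n');  // push one chunk per second
            if (ticks >= 60) {                        // client reconnects once a minute
              clearInterval(timer);
              res.end();
            }
          }, 1000);
          req.on('close', function () { clearInterval(timer); });
        }).listen(8000);

    Whether this wins overall still comes down to measuring the memory and CPU of nginx plus Node.js against the Apache/PHP processes they replace under your real traffic.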

    Read the article

  • Javascript widgets: do links count as SEO backlinks? [closed]

    - by j0nes
    Possible Duplicate: How good is it for SEO if you have a widget that lives on other sites? On my website I offer an option to let users embed information from my site with some kind of "homepage widget". If a user wants to embed it in his website, he basically has to add one line of JavaScript to his HTML files like this: <script src="http://mysite.com/myscript.php?some_options_here"></script> Inside the widget, I export some content from my website and of course create a link back to my website. This is done in JavaScript with document.write. document.writeln("My great exported content"); document.writeln('<a href="http://mysite.com?ref=widget">Check mysite.com</a>'); I have Google Analytics set up to track whether the links in there get clicked, and they do. Now I am asking myself whether Google recognizes these links as valid backlinks from the embedding domain. I know that Googlebot can parse and execute JavaScript, but I have not found any references on whether these links also count as "normal" backlinks.
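
    Whether Google credits document.write links is exactly the open question here, so as a hedge some widget providers also ship a plain-HTML fallback that does not depend on script execution at all; a sketch, reusing the URLs from the question:

        <!-- Embed snippet given to users: script plus a static fallback link -->
        <script src="http://mysite.com/myscript.php?some_options_here"></script>
        <noscript>
          <a href="http://mysite.com?ref=widget">Check mysite.com</a>
        </noscript>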

    Read the article

  • SharePoint: UI for ordering list items

    - by svdoever
    SharePoint list items have a field named Order in the base Item template. This field is not shown by default. SharePoint 2007, 2010 and 2013 provide a UI to specify the order, using the _layouts page: {SiteUrl}/_layouts/Reorder.aspx?List={ListId} In SharePoint 2010 and 2013 it is possible to add a custom action to a list, so you can add a custom action to order list items as follows (SharePoint 2010 description): open SharePoint Designer 2010; navigate to a list; select Custom Actions > List Item Menu; fill in the dialog box; open List Settings > Advanced Settings > Content Types, and set Allow management of content types to No; on List Settings select Column Ordering. This results in a new Order Items action in the browser UI (under List Tools > Items), which opens the reorder page. You can change your custom action in SharePoint Designer; on the list screen, in the bottom right corner, you can find the custom action. We now need to modify the view to include the order, by adding the Order field to the view and sorting on the Order field. The problem is that the Order field is hidden. It is possible to modify the schema of the Order field to set the Hidden attribute to FALSE, but if we don't want to write code and still want the required result, it is also possible to modify the view's code through SharePoint Designer (a sketch of the kind of change follows below). Note that if you change the view through the web UI these changes are partly lost; if that is a problem, modify the Order field schema for the list instead.
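
    For the view change, a sketch of the kind of CAML that ends up in the view after editing it in SharePoint Designer, assuming the field's internal name is Order:

        <ViewFields>
          <FieldRef Name="Order" />
          <FieldRef Name="LinkTitle" />
        </ViewFields>
        <Query>
          <OrderBy>
            <FieldRef Name="Order" Ascending="TRUE" />
          </OrderBy>
        </Query>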

    Read the article

  • SEO questions about PR, Page Structure, and Other

    - by jasondavis
    A couple of basic questions related to SEO. 1) Suppose I have a site with several different niches that I am trying to promote, for example a Web Developer site broken down into sections for Web Design, Graphic Design, Programming, and Software. For SEO purposes, would it be better to use subdomains for these main sections or to use the main domain with a folder-like structure? 2) Is PR different for each page of a domain, or does every page on that domain have the same PR? Also, do subdomains have a different PR? 3) When entering a hugely oversaturated niche such as web design, is it even possible to compete with the big sites that have ranked on Google's #1 page for years? 4) Lastly, I have read about how important titles, link anchors, and headings are for SEO, and how content is the most important. So let's say we are building a standard header, body, sidebar, footer page. In the actual markup, would it be better to make sure the main content comes before the sidebar on the page, or does this probably not make a difference? 5) I've seen it mentioned in another answer here that microformats can help with SEO; is there any fact behind this? Thank you for any info on this.

    Read the article

  • Request Removal of naked domain from Google Index

    - by Pedr
    I have a site which was temporarily available at both example.com and www.example.com. All traffic to example.com is now redirected to www.example.com; however, during the brief period that the site was available at the naked domain, Google indexed it. So Google now has two versions of every page indexed: www.example.com www.example.com/about_us www.example.com/products/something ... and example.com example.com/about_us example.com/products/something ... For obvious reasons, this is a bad situation, so how can I best resolve it? Should I request removal of these pages from the index? There is still content at these URLs, but they now redirect to the www subdomain equivalent. The site has many hundreds of pages, but the only way I can see to request removal is via the Remove outdated content screen in Webmaster Tools, one URL at a time. How can I request removal of an entire domain (i.e. the naked domain) without it affecting the true site located at the www subdomain? Is removal even the correct strategy, given that the naked domain's URLs now all redirect to their www equivalents?

    Read the article

  • .htaccess ignored, SPECIFIC to EC2 - not the usual suspects

    - by tedneigerux
    I run 8-10 EC2 based web servers, so my experience is many hours, but is limited to CentOS; specifically Amazon's distribution. I'm installing Apache using yum, so therefore getting Amazon's default compilation of Apache. I want to implement canonical redirects from non-www (bare/root) domain to www.domain.com for SEO using mod_rewrite BUT MY .htaccess FILE IS CONSISTENTLY IGNORED. My troubleshooting steps (outlined below) lead me to believe it's something specific to Amazon's build of Apache. TEST CASE Launch a EC2 Instance, e.g. Amazon Linux AMI 2013.03.1 SSH to the Server Run the commands: $ sudo yum install httpd $ sudo apachectl start $ sudo vi /etc/httpd/conf/httpd.conf $ sudo apachectl restart $ sudo vi /var/www/html/.htaccess In httpd.conf I changed the following, in the DOCROOT section / scope: AllowOverride All In .htaccess, added: (EDIT, I added RewriteEngine On later) RewriteCond %{HTTP_HOST} ^domain\.com$ [NC] RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L] Permissions on .htaccess are correct, AFAI can tell: $ ls -al /var/www/html/.htaccess -rwxrwxr-x 1 git apache 142 Jun 18 22:58 /var/www/html/.htaccess Other info: $ httpd -v Server version: Apache/2.2.24 (Unix) Server built: May 20 2013 21:12:45 $ httpd -M Loaded Modules: core_module (static) ... rewrite_module (shared) ... version_module (shared) Syntax OK EXPECTED BEHAVIOR $ curl -I domain.com HTTP/1.1 301 Moved Permanently Date: Wed, 19 Jun 2013 12:36:22 GMT Server: Apache/2.2.24 (Amazon) Location: http://www.domain.com/ Connection: close Content-Type: text/html; charset=UTF-8 ACTUAL BEHAVIOR $ curl -I domain.com HTTP/1.1 200 OK Date: Wed, 19 Jun 2013 12:34:10 GMT Server: Apache/2.2.24 (Amazon) Connection: close Content-Type: text/html; charset=UTF-8 TROUBLESHOOTING STEPS In .htaccess, added: BLAH BLAH BLAH ERROR RewriteCond %{HTTP_HOST} ^domain\.com$ [NC] RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L] My server threw an error 500, so I knew the .htaccess file was processed. As expected, it created an Error log entry: [Wed Jun 19 02:24:19 2013] [alert] [client XXX.XXX.XXX.XXX] /var/www/html/.htaccess: Invalid command 'BLAH BLAH BLAH ERROR', perhaps misspelled or defined by a module not included in the server configuration Since I have root access on the server, I then tried moving my rewrite rule directly to the httpd.conf file. THIS WORKED. This tells us several important things are working. $ curl -I domain.com HTTP/1.1 301 Moved Permanently Date: Wed, 19 Jun 2013 12:36:22 GMT Server: Apache/2.2.24 (Amazon) Location: http://www.domain.com/ Connection: close Content-Type: text/html; charset=UTF-8 HOWEVER, it is bothering me that it didn't work in the .htaccess file. And I have other use cases where I need it to work in .htaccess (e.g. an EC2 instance with named virtual hosts). Thank you in advance for your help.
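
    For reference, a hedged sketch of the two pieces that both have to line up on a stock Apache 2.2 layout; treating /var/www/html as the DocumentRoot is an assumption about Amazon's default httpd.conf:

        # httpd.conf -- AllowOverride has to sit inside the <Directory> block
        # that matches the DocumentRoot:
        <Directory "/var/www/html">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

        # /var/www/html/.htaccess -- mod_rewrite directives are ignored without
        # RewriteEngine On, and in per-directory context the pattern is matched
        # without a leading slash:
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]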

    Read the article

  • File Activation in Windows RT

    - by jdanforth
    The code sample for file activation on MSDN is lacking some code, so a simple way to pass the clicked file to your MainPage could be: protected override void OnFileActivated(FileActivatedEventArgs args) {     var page = new Frame();     page.Navigate(typeof(MainPage));     Window.Current.Content = page;       var p = page.Content as MainPage;     if (p != null) p.FileEvent = args;     Window.Current.Activate(); } And in MainPage: public MainPage() {     InitializeComponent();     Loaded += MainPageLoaded; } void MainPageLoaded(object sender, RoutedEventArgs e) {     if (FileEvent != null && FileEvent.Files.Count > 0)     {         //… do something with file     } }
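
    A hedged sketch of the one piece the snippet assumes but does not show, the FileEvent property on MainPage:

        // MainPage.xaml.cs -- holds the activation args handed over by OnFileActivated
        public Windows.ApplicationModel.Activation.FileActivatedEventArgs FileEvent { get; set; }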

    Read the article

  • Redirection & SEO related stuff while moving to a new blog

    - by Karshim Kanwar
    I have a WordPress blog and recently set up a new one; let's call them "blog old" and "blog new". I moved the content, photos, pictures and all 250 posts from blog old to blog new. The blog names are different, as they point to different domain names. I have read helpful things on this site itself, here. I will no longer use blog old, but I am concerned about the SEO of blog new. Blog new is fairly new (just 24 hours old, and no pages have been indexed in Google yet). I have done the following: deleted all the posts shared on the Facebook fan page, Twitter profile and Google+ page, and finally deleted the fan page, Twitter profile and Google+ page themselves; and edited the links back to the old blog in blog new. My questions are: How do I prevent duplicate content issues? Do I go straight ahead and delete all the posts in blog old? Should I start sharing the blog posts from blog new? Should I submit the new site to Webmaster Tools or wait a few weeks? Every comment here is appreciated! What SEO issues could I face?
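
    On the duplicate-content point, the standard approach is to keep blog old's domain registered just long enough to 301-redirect every old URL to its counterpart on blog new, rather than deleting the posts outright; a hedged Apache sketch for the old domain (the new domain name is a placeholder), assuming the post slugs stay the same:

        # Sends oldblog.example/some-post to http://www.newblog.example/some-post
        Redirect 301 / http://www.newblog.example/

    With the redirects in place, search engines consolidate the two copies onto blog new instead of treating them as duplicates.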

    Read the article

  • 'Buy the app' landing page implementations

    - by benwad
    My site (using Django) has an app that I'm trying to push - I currently have a piece of middleware that redirects the user to a page advertising the app if they're accessing the site on an iPhone, and then sets a cookie so that the user isn't bugged by the message every time they visit. This works fine; however, checking the page with the mobile Googlebot checker shows that the Googlebot gets stuck in the redirect (since it doesn't store cookies) and therefore won't index the proper content. So, I'm trying to think of an alternative implementation that won't hurt the site's Google ranking and won't have any other adverse effects. I've considered a couple of options: Redirect (the current solution), but don't redirect if the user agent matches the Googlebot's UA string. This would be ideal; however, I'm not sure Google likes its bot being treated differently from other users, and I'm afraid the site's ranking may be somehow penalised if I go ahead with this. Use a JavaScript popup instead of a redirect. This would make sure the Googlebot finds the content it needs; however, I envision this approach causing compatibility issues with the myriad mobile devices/browsers out there, and it may affect page load time. How valid are these options? And is there a better option for implementing this feature out there? I've tried researching this topic but surprisingly can't find any reputable-looking blog posts that explore it. EDIT: I posted this on SF because it seemed unsuitable for SO, but if there's another site that would be better for this issue then I'd be happy to move the question elsewhere.
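
    A hedged sketch of what the first option could look like as old-style Django middleware; the cookie name and landing URL are hypothetical, and this says nothing about whether Google would treat the user-agent check as cloaking:

        from django.http import HttpResponseRedirect

        class AppLandingRedirectMiddleware(object):
            def process_request(self, request):
                ua = request.META.get('HTTP_USER_AGENT', '')
                is_bot = 'Googlebot' in ua
                is_iphone = 'iPhone' in ua
                already_seen = request.COOKIES.get('seen_app_promo')
                if is_iphone and not is_bot and not already_seen:
                    response = HttpResponseRedirect('/get-the-app/')
                    response.set_cookie('seen_app_promo', '1')
                    return response
                return None  # fall through to the normal view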

    Read the article

  • Browser Statistics for Geekswithblogs.net

    - by Jeff Julian
    I love Google Analytics! It helps me so much during my day-to-day maintenance of Geekswithblogs.net and our other sites. I can see so much data about our visitors and come up with new ways of delivering more content to our readers so they can really get the most out of our community. Browsers and browser versions are a big indicator that helps me decide what we can support and what we need to be testing with. The clear browsers of choice right now are Chrome, IE, and Firefox, which together take up 94.1%. The next browser is Safari at 2.71%. What this really brings to my attention, besides the fact that I had better test well with Chrome, Firefox, and IE, is that we are definitely missing an opportunity with mobile devices. We really need to turn up the heat on a mobile presence for Geekswithblogs.net as a community and for the blogs on this site. We need easy discovery of new content and easy tracking of what I like. I am definitely on a mission to make this happen, and it will be a phased approach, but I want to see these numbers change, since most of us have 2 or 3 mobile devices we use for social activities, yet tools for interacting with technical content beyond RSS readers are lacking. Technorati Tags: Mobile,Geekswithblogs.net,Browsers

    Read the article

  • New Marketing Kits Available

    - by Cinzia Mascanzoni
    New marketing kits are available on the OPN portal: Oracle Optimized Datacenter; Oracle Storage for Oracle Database and Engineered Systems; StorageTek SL150 - New Scalable Storage Solutions for Growing Businesses; Extreme Database Performance meets Its Backup and Recovery Match with Oracle's Sun ZFS Backup Appliance; Maximize Value and Business Agility through Data Center Virtualization; Be A Content King with Oracle WebCenter Content.

    Read the article

  • Cannot submit change of address to subdomain in Google Webmaster Tools?

    - by RCNeil
    I am pointing several domains to one URL, a URL which happens to include a subdomain. ALL of the domains are using 301 redirects to point to this new address. One of the older domains (which used to be a site) is a 'property' in Webmaster Tools, as is the new site (the one with the subdomain.) When registering a 'Change of Address' for the old site with WebmasterTools, it suggests the following method - Set up your content on your new domain. (done) Redirect content from your old site using 301 redirects. (done) Add and verify your new site to Webmaster Tools. (done) Then, directly below that, to proceed, it says Tell us the URL of your new domain: Your account doesn't contain any sites we can use for a change of address. Add and verify the new site, then try again. I have already submitted and verified the new site. The only reason I can fathom I am getting this error is because the new site includes a subdomain. Although I don't foresee getting punished for this, as I am correctly 301 redirecting traffic anyway, I'm curious as to why the Change of Address submission isn't working appropriately for me. Has anyone else had experience with this?

    Read the article

  • How to quickly search through a very large list of strings / records on a database

    - by Giorgio
    I have the following problem: I have a database containing more than 2 million records. Each record has a string field X and I want to display a list of records for which field X contains a certain string. Each record is about 500 bytes in size. To make it more concrete: in the GUI of my application I have a text field where I can enter a string. Above the text field I have a table displaying the (first N, e.g. 100) records that match the string in the text field. When I type or delete one character in the text field, the table content must be updated on the fly. I wonder if there is an efficient way of doing this using appropriate index structures and / or caching. As explained above, I only want to display the first N items that match the query. Therefore, for N small enough, it should not be a big issue loading the matching items from the database. Besides, caching items in main memory can make retrieval faster. I think the main problem is how to find the matching items quickly, given the pattern string. Can I rely on some DBMS facilities, or do I have to build some in-memory index myself? Any ideas? EDIT I have run a first experiment. I have split the records into different text files (at most 200 records per file) and put the files in different directories (I used the content of one data field to determine the directory tree). I end up with about 50000 files in about 40000 directories. I have then run Lucene to index the files. Searching for a string with the Lucene demo program is pretty fast. Splitting and indexing took a few minutes: this is totally acceptable for me because it is a static data set that I want to query. The next step is to integrate Lucene in the main program and use the hits returned by Lucene to load the relevant records into main memory.
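
    On the DBMS side, a hedged sketch of what the indexing options look like in plain SQL (MySQL flavour); whether any of them is fast enough for keystroke-by-keystroke updates still has to be measured:

        -- A B-tree index only helps anchored (prefix) searches:
        CREATE INDEX idx_records_x ON records (x);
        SELECT * FROM records WHERE x LIKE 'searchterm%' LIMIT 100;   -- can use the index

        -- A leading wildcard forces a scan of all 2M rows:
        SELECT * FROM records WHERE x LIKE '%searchterm%' LIMIT 100;  -- slow

        -- Built-in alternative to an external Lucene index (MyISAM, or InnoDB from MySQL 5.6):
        ALTER TABLE records ADD FULLTEXT INDEX ft_x (x);
        SELECT * FROM records
        WHERE MATCH(x) AGAINST ('searchterm*' IN BOOLEAN MODE)
        LIMIT 100;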

    Read the article

  • New: Online NetBeans 8 Crash Course

    - by Geertjan
    On Twitter today I came across an announcement for a brand new on-line course in NetBeans 8. Since NetBeans 8 has been released during the past few months, the course is really very new. Go here to get there directly: https://www.video2brain.com/de/videotraining/netbeans-ide-8-0-crashkurs Here's the general idea. As you can see, the course is in German. With my basic understanding of German, I've had no problem in following the course. The trainer speaks clearly and slowly and everything is very well structured. The course covers all the basics of NetBeans IDE. From getting set up to using all the key features. The quality of the videos is great and the content is clear and informative. Once you've bought the course, all the lessons are unlocked. As you can see, they're all quite short and there's really a lot of content, didn't all fit into the screenshot: Quite some work must have gone into this. Here's one of the free lessons in the course, to give an idea of what you'll get: https://www.video2brain.com/de/tutorial/texte-internationalisieren This one is also free: https://www.video2brain.com/de/tutorial/eclipse-projekt-importieren I highly recommend this course especially if you're switching, or thinking about switching, from a different IDE and want to get a thorough overview of all the features that NetBeans IDE provides. Everything in the course is done within NetBeans, which means no slides, just code. You get to see the workflow of all the standard tasks and, for these purposes, the course does a really great job.

    Read the article

  • XML Rules Engine and Validation Tutorial with NIEM

    - by drrwebber
    Our new XML Validation Framework tutorial video is now available. See how to easily integrate code-free, adaptive XML validation services into your web services using the Java CAMV validation engine. CAMV allows you to build fault-tolerant content checking with XPath rules that can optionally use SQL data lookups. This can produce warnings as well as error conditions, so you can tailor your validation layer to exactly meet your business application needs. Also covered is developing test suites using Apache ANT scripting of validations, which allows a community to share sets of conformance-checking tests and tools. On the technical XML side, the video introduces XPath validation rules and illustrates the concepts of XML content and structure validation. CAM validation templates allow contextual, parameter-driven, dynamic validation services to be implemented, compared to a static and brittle XSD schema approach. The SQL table lookup and code list validation are discussed and examples presented. Features are highlighted along with a demonstration of interactively generating live XML data from a SQL data store and then running validation processing, complete with error and warning detection. The presentation provides a primer for developing web service XML validation and integrating it into a SOA approach, along with examples and resources. Alignment with the NIEM IEPD process for interoperable information exchanges is also discussed, along with NIEM rules services. The CAMV engine is a high-performance, scalable Java component for rapidly implementing code-free validation services and methods. CAMV is a next-generation WYSIWYG approach that builds on older Schematron coding-based interpretative runtime tools and provides a simpler declarative metaphor for rules definition. See: http://www.youtube.com/user/TheCAMeditor
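
    This is not the CAM template syntax itself, just a plain JDK XPath check in the same spirit, to illustrate the kind of content rule such a validation layer evaluates (the element names are hypothetical):

        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPath;
        import javax.xml.xpath.XPathConstants;
        import javax.xml.xpath.XPathFactory;
        import org.w3c.dom.Document;

        public class XPathRuleCheck {
            public static void main(String[] args) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse("exchange.xml");
                XPath xp = XPathFactory.newInstance().newXPath();
                // Rule: every LineItem must carry a non-empty ItemID
                Boolean ok = (Boolean) xp.evaluate(
                        "count(//LineItem[not(string(ItemID))]) = 0",
                        doc, XPathConstants.BOOLEAN);
                System.out.println(ok ? "valid" : "error: LineItem missing ItemID");
            }
        }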

    Read the article

  • Remove DRM From a WMV file I own

    - by Rev
    Alright, first, let me explain. I purchased some content through Microsoft's Zune/Xbox Video service, and man that was a mistake. After trying several times to get the video to play, I received an error along the lines of "out of licenses." Lucky for me, I was able to recover the file I was looking for off of a backup, but now I'm having problems playing it. It works fine in Windows Media Player, but not in Zune or Xbox Video (on Windows 8). I contacted Zune's customer support, and of course they couldn't help me. So, I legally own the content, I just want to be able to play it where I want to play it. It's ridiculous that I can't. I know there are ways to do this out there, I just can't figure it out (I keep getting directed to this piece of junk thing called Almedia, which kind of seems to work, but was only putting out audio in the demo version). Thanks!

    Read the article

  • Unable to create suitable graphics device?

    - by kraze
    I've been following the Eye of the Dragon tutorial, which is basically a guide to making a 2D RPG game, obviously. I recently finished the tutorial about making pop-up screens in the menu and changed the game to load in full screen. Whenever I try to load the game it just goes black and my mouse sits there. I cannot get out of it other than with Ctrl+Alt+Del. Once I do that, it says "No suitable graphics card found, unable to create graphics device." I read somewhere about XNA not allowing more than one screen when any one of them is full screen, but it wasn't very informative. Anyone have any ideas what's going on and/or how to fix this? Just in case it helps, this is the code for the graphics device: public Game1() { graphics = new GraphicsDeviceManager(this); graphics.PreferredBackBufferWidth = 900; graphics.PreferredBackBufferHeight = 768; graphics.IsFullScreen = true; this.Window.Title = "Eyes of the Dragon"; Content.RootDirectory = "Content"; }
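
    One thing worth checking (a hedged guess, not a confirmed diagnosis): 900x768 is not a standard display mode, and XNA cannot create a full-screen device at a resolution the adapter does not expose. A sketch that falls back to windowed mode, placed in the Game1 constructor before the other settings:

        // Only go full screen if the default adapter actually offers 900x768.
        bool modeSupported = false;
        foreach (DisplayMode mode in GraphicsAdapter.DefaultAdapter.SupportedDisplayModes)
        {
            if (mode.Width == 900 && mode.Height == 768)
            {
                modeSupported = true;
                break;
            }
        }
        graphics.IsFullScreen = modeSupported;  // stay windowed otherwise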

    Read the article

  • IIS7.5 is only resolving to the Default Web Site

    - by Dennis Burnham
    I am able to access only the Default Web Site on a Windows 2008 R2 server running IIS 7.5. The Default Web Site's binding is "All Unassigned", the same way I set it up on a different machine running IIS 6 under Windows 2003 Server. The bindings of the desired web site have the IP address of the server, and it has the correct home directory. Regardless of what I do, the only page content I can see is the default index page in the wwwroot directory, which belongs to the Default Web Site. What must I do to serve the correct content from the web sites that are configured in IIS 7.5?

    Read the article

  • Strange PHP output buffering

    - by radek-k
    PHP: header('Content-type: text/plain'); for ($i=0; $i<10; $i++){ echo "$i\r\n"; ob_flush(); flush(); sleep(1); } I tried the script above on 2 different servers. Both respond with the numbers 0...9, one per line. On the first server each number arrives once per second. On the second server there is no output for 10 seconds and then the entire output is displayed at once. What might be wrong in the second case? I tried various output control functions but it didn't help. The set of response headers in both cases is pretty much the same: HTTP/1.1 200 OK Date: Mon, 03 Jan 2011 19:21:21 GMT Server: Apache X-Powered-By: PHP/5.2.14 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Transfer-Encoding: chunked Content-Type: text/plain
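
    On the server that buffers, the usual suspects are a buffering or compression layer configured globally (output_buffering, zlib.output_compression, mod_deflate, or an nginx proxy in front). A hedged sketch of the same script with those layers defused where PHP can reach them; which line matters depends on that server's actual stack:

        <?php
        header('Content-Type: text/plain');
        header('X-Accel-Buffering: no');           // ask nginx (if proxying) not to buffer
        @ini_set('zlib.output_compression', '0');  // gzip would otherwise hold the output
        @ini_set('output_buffering', '0');
        while (ob_get_level() > 0) {               // close buffers started by the config
            ob_end_flush();
        }
        ob_implicit_flush(true);

        for ($i = 0; $i < 10; $i++) {
            echo str_pad("$i\r\n", 4096);          // some stacks only flush past a block size
            flush();
            sleep(1);
        }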

    Read the article

  • how to copy the results from a grep command to the bash clipboard?

    - by avilella
    If I type something in a Linux bash terminal with no X and then press Ctrl+u, whatever I typed is stored in the bash "clipboard" (for lack of a better term), and I can paste it back with Ctrl+y. How can I copy the results of a grep command on a text file into this bash clipboard? For example, if I have an INSTALL file like this: ./installprocedure --do-some-long-and-complicated-operation-on-dir dir1 How can I copy the output of a grep command so that it's available with Ctrl+y? For example: copy content to bash clipboard "grep installprocedure INSTALL" Ctrl+y ./installprocedure --do-some-long-and-complicated-operation-on-dir dir1 #cursor available here
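
    Bash has no direct "grep into the kill ring" shortcut, but a hedged sketch of two ways that get close:

        # 1. Push the matching line into bash history, then press Up (or Ctrl+p)
        #    to pull it onto the command line for editing:
        history -s "$(grep installprocedure INSTALL)"

        # 2. Or skip the clipboard idea and run the matched line directly:
        eval "$(grep installprocedure INSTALL)"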

    Read the article

  • Display "iftop" on web

    - by DmitrySemenov
    I run iftop -i eth1 > out.txt and it produces a file full of "encrypted" UI content (terminal escape sequences) such as [(B[)0[[1;80r[[mO[[?7h[[?1h[=[[H[[J[[0;7mO Listening on eth1 [[1;48H[[mO12.5Kb Is it possible to display this as web (XHTML) output somehow? Running cat out.txt in my console does reproduce a normal iftop screen, but when I serve the same file over the web I get the content above. I understand that this rendering is handled by the terminal. Is the task that I want to perform possible?
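
    A hedged sketch of a simpler route: iftop has a plain-text mode (-t) that avoids the ncurses escape codes entirely, which is much easier to publish on a web page (exact flags may vary between iftop versions):

        # One plain-text report after 10 seconds, no escape sequences:
        iftop -t -s 10 -i eth1 > out.txt

        # Wrap it in <pre> so a browser keeps the column layout:
        { echo '<pre>'; cat out.txt; echo '</pre>'; } > /var/www/html/iftop.html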

    Read the article

  • Is Protune for video only or may be used for photo too?

    - by Green
    I have a Hero3+ Black Edition. I can't work out whether Protune is for video only or can also be applied to photos. The manual says it is for both video and photo (page 35): High-Quality Image Capture Protune's high data rate captures images with less compression, giving content creators higher quality for professional productions. Film/TV Rate Standard While shooting in Protune, you have the option of recording video in cinema quality 24 fps to easily intercut GoPro content with other source media without the need to perform fps conversion. But at the same time their site says that Protune is for video only: To record Protune footage, you'll need to turn Protune ON in your camera's settings menu. So what is Protune for? Photo, video, or both?

    Read the article

  • Welcome to the Java Training Beat!

    - by tmcginn
    We are a group of dedicated training developers for Java, located in the US, India, and now Mexico. In this blog we will announce new training content and events that might be of interest to our readers. In this first installment of the Java Training Beat, I would like to introduce three new Oracle By Example (OBE) modules I recently released and posted to the Oracle Online Learning Library. Creating a Simple Java Message Service (JMS) Producer with NetBeans and GlassFish - covers how to create a simple text message producer with NetBeans 7 and GlassFish. Creating Java Message Service (JMS) Resources in WebLogic Server 12c - covers how to create JMS resources using the console and WebLogic Server 12c. With this tutorial, you can replicate the results of the first tutorial in WebLogic. Creating a Publish/Subscribe Model with Message-Driven Beans and GlassFish Server - covers how to create a publish/subscribe application using JMS. This tutorial includes a short case study that includes a JSF front-end application that sends a hotel reservation request object to the server as a MapMessage. Hope you find these useful!  And do check out the Online Learning Library - we have a wide range of additional content posted and more being added every month!
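
    As a taste of what the first module covers, a hedged sketch of a JMS 1.1 text-message producer; the JNDI names are hypothetical and the class is assumed to be container-managed (a servlet or EJB) so that @Resource injection works:

        import javax.annotation.Resource;
        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.JMSException;
        import javax.jms.MessageProducer;
        import javax.jms.Queue;
        import javax.jms.Session;
        import javax.jms.TextMessage;

        public class SimpleProducer {

            @Resource(lookup = "jms/MyConnectionFactory")
            private ConnectionFactory connectionFactory;

            @Resource(lookup = "jms/MyQueue")
            private Queue queue;

            public void sendText(String text) throws JMSException {
                Connection connection = connectionFactory.createConnection();
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(queue);
                    TextMessage message = session.createTextMessage(text);
                    producer.send(message);  // fire-and-forget text message
                } finally {
                    connection.close();
                }
            }
        }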

    Read the article
