Search Results

Search found 10677 results on 428 pages for 'mod status'.

Page 278/428

  • NTP service, offset increasing after sync

    - by Ajay
    I have installed Ubuntu 12.10 on my PC. I am running the NTP service with a GPS receiver as the NTP server. When I start the NTP service (ntp start), the PC is able to sync with the GPS: I get a '*' symbol before the GPS IP when I run ntpq -p. This stays good for some time, and then the '*' symbol disappears, which means the PC is no longer synchronized to that server. ntpq -p still shows all parameters as OK, but with the '*' gone the offset slowly keeps increasing:

             remote         refid  st t when poll reach  delay   offset  jitter
        ==========================================================================
        *192.168.100.33    .GPS.    1 u    7   16     1  2.333   23.799   0.808
        *192.168.100.33    .GPS.    1 u   14   16     3  2.333   23.799   0.879
        *192.168.100.33    .GPS.    1 u   11   16     7  2.333   23.799   1.500
        *192.168.100.33    .GPS.    1 u    8   16    17  2.333   23.799   2.177

    Below are the last ntpq outputs after sync with the GPS is lost:

             remote         refid  st t when poll reach  delay   offset   jitter
        ==========================================================================
         192.168.100.33    .GPS.    1 u    1   16   377  2.404  1169.94    1.735
         192.168.100.33    .GPS.    1 u    -   16   377  2.513  1171.80    0.898
         192.168.100.33    .GPS.    1 u   15   16   377  2.513  1171.80    0.898

    Even though the GPS is still reachable, the PC never re-synchronizes to it on its own. I have to restart the ntp service, after which the PC synchronizes to the GPS and the '*' symbol comes back.
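    A minimal watchdog sketch for the restart workaround described above, runnable from cron; the service name ("ntp") and the use of ntpq are taken from the question, the rest is an assumption about the environment:

        #!/bin/sh
        # Hypothetical workaround: if ntpq reports no selected ('*') peer,
        # restart the ntp service so it re-acquires the GPS server.
        if ! ntpq -pn | grep -q '^\*'; then
            logger "ntp-watchdog: no selected peer, restarting ntp"
            service ntp restart
        fi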

    Read the article

  • iPhone 4S Costs 50k In India. Heck! Rather I Buy Tata Nano Car For Twice The Money

    - by Gopinath
    Are you waiting to buy the iPhone 4S in India? Stop waiting and start looking for alternatives, as it is going to be released in India with mind-blowing price tags. A 16 GB iPhone 4S costs Rs. 44,500 + tax, the 32 GB is Rs. 50,900, and the 64 GB... wait! Are you really interested in the price? I'm not at all. It's ridiculous to spend Rs. 50,000 on a mobile phone in India, and I hope the majority of Indians agree with me. The Tata Nano, the world's cheapest car, costs close to double the price of an iPhone 4S. Instead of buying an iPhone 4S for around 50K, it's a wiser decision to buy a Tata Nano. Can the super rich of India afford to pay around Rs. 50,000 to own an iPhone 4S? I think they would love to own it to show off their status, but I guess they would prefer to get it from the US through friends and relatives. In the USA an unlocked iPhone 4S bought through the Apple Online Store costs just Rs. 33,500 (~650 USD), a straight Rs. 11,000 discount. Why would the rich burn money? Airtel and Aircel have announced that the iPhone 4S is going to be available on their networks from November 25 onwards, and both operators have started accepting pre-orders. If you are really willing to burn your cash, go ahead and book an iPhone 4S. This article titled, iPhone 4S Costs 50k In India. Heck! Rather I Buy Tata Nano Car For Twice The Money, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • How do I make changes to /proc/acpi/wakeup permanent?

    - by Jolan
    I had a problem with my Ubuntu 12.04 waking up immediately after going into suspend. I solved it by changing the settings in /proc/acpi/wakeup, as suggested in this question: How do I prevent immediate wake up from suspend?. After changing the settings the system goes flawlessly into suspend and stays suspended, but after I wake it back up, the settings in /proc/acpi/wakeup differ from what I set them to. Before going to suspend, cat /proc/acpi/wakeup shows:

        Device  S-state   Status     Sysfs node
        SMB0      S4    *disabled   pci:0000:00:03.2
        PBB0      S4    *disabled   pci:0000:00:09.0
        HDAC      S4    *disabled   pci:0000:00:08.0
        XVR0      S4    *disabled   pci:0000:00:0c.0
        XVR1      S4    *disabled
        P0P5      S4    *disabled
        P0P6      S4    *disabled   pci:0000:00:15.0
        GLAN      S4    *enabled    pci:0000:03:00.0
        P0P7      S4    *disabled   pci:0000:00:16.0
        P0P8      S4    *disabled
        P0P9      S4    *disabled
        USB0      S3    *disabled   pci:0000:00:04.0
        USB2      S3    *disabled   pci:0000:00:04.1
        US15      S3    *disabled   pci:0000:00:06.0
        US12      S3    *disabled   pci:0000:00:06.1
        PWRB      S4    *enabled
        SLPB      S4    *enabled

    I tell the system to suspend and it works as it should. But later, after waking it up, the settings have changed to either:

        USB0      S3    *disabled   pci:0000:00:04.0
        USB2      S3    *enabled    pci:0000:00:04.1
        US15      S3    *disabled   pci:0000:00:06.0
        US12      S3    *enabled    pci:0000:00:06.1

    or:

        USB0      S3    *enabled    pci:0000:00:04.0
        USB2      S3    *enabled    pci:0000:00:04.1
        US15      S3    *enabled    pci:0000:00:06.0
        US12      S3    *enabled    pci:0000:00:06.1

    Any ideas? Thank you for your response. Unfortunately it did not solve my problem. All of /sys/bus/usb/devices/usb1/power/wakeup, /sys/bus/usb/devices/usb2/power/wakeup, /sys/bus/usb/devices/usb3/power/wakeup and /sys/bus/usb/devices/usb4/power/wakeup, as well as /sys/bus/usb/devices/3-1/power/wakeup, are set to disabled, and the notebook still wakes up by itself right after going to sleep. The only thing it seems to react to are the settings in /proc/acpi/wakeup, which keep changing (resetting) every time I power off or restart the notebook.
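    Since writing a device name to /proc/acpi/wakeup toggles its state, a common way to make the change survive reboots is to re-apply the toggles at boot, e.g. from /etc/rc.local. A minimal sketch, assuming the devices to keep disabled are the USB controllers from the listing above:

        #!/bin/sh -e
        # /etc/rc.local -- runs once at the end of boot, must end with "exit 0".
        # Writing a name toggles that device, so only toggle entries that are
        # currently enabled but should be disabled.
        for dev in USB0 USB2 US15 US12; do
            if grep -q "^$dev.*\*enabled" /proc/acpi/wakeup; then
                echo "$dev" > /proc/acpi/wakeup
            fi
        done
        exit 0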

    Read the article

  • Big Success for the 2012 edition of the Oracle EMEA Cloud CRM Partner Community Forum!

    - by Richard Lefebvre
    The 2012 edition of the Oracle EMEA Cloud Partner Community Forum took place on March 28-29 in Madrid. 100 participants from all over Europe had the chance to interact with the Oracle Cloud CRM Product Management team about multiple subjects such as Oracle Cloud CRM and Social Network solutions strategy, the RightNow acquisition, Fusion CRM business opportunities for partners, etc. During his opening keynote, Anthony Lye (Oracle Senior Vice-President and head of Oracle CRM) presented the current Fusion CRM business status, disclosed the overall Oracle CRM product strategy and responded to many questions from the audience. Later that day, 8 Oracle ISVs presented their Oracle Cloud CRM add-ons and highlighted the value that System Integrators can derive from them as part of a Cloud CRM project. After a very friendly networking dinner in a Spanish restaurant, the second day was dedicated to Fusion CRM, with a deeper dive into its major components (Sales, Sales Planning, Marketing) including the Fusion Composers. Briefings on Oracle Consulting Services' dedicated Fusion CRM offerings and Fusion CRM partner programs concluded the day and the event. All participants rated the event as "good" to "excellent" and mentioned that it was meaningful for planning their Oracle Cloud CRM based business in the near future. We look forward to organising a similar event next year and to welcoming even more partners! Richard Lefebvre

    Read the article

  • From Sea to Shining Fusion HCM Specialization

    - by Kristin Rose
    Well, the polls have closed, the votes are in and Oracle Fusion HCM Specialization is finally here! Not only is this Specialization easily achievable, partners are already seeing the "economic" value in it. But don't just take our word for it, watch below as Oracle Diamond Partner, Infosys, shares their experience with Oracle Fusion HCM and all the success they've already seen! Here is how you can make a change and get started today:
    STEP 1: Join OPN
    STEP 2: Join Knowledge Zone
    STEP 3: Check Business and Competency Criteria
    STEP 4: Track Competency Status
    STEP 5: Apply Now
    So let's put our differences aside, put Oracle Fusion first, and come together by learning more about this Oracle Fusion HCM Specialization. We are OPN and we approve this message, The OPN Communications Team

    Read the article

  • Take Control of Workflow with Workflow Analyzer!

    - by user793553
    Take Control of Workflow with Workflow Analyzer! Immediate analysis and output of your EBS Workflow environment. The EBS Workflow Analyzer is a script that reviews the current Workflow footprint, analyzes the configuration and environment, and provides feedback and recommendations on best practices and areas of concern. Go to Doc ID 1369938.1 for more details, the script download, and a short overview video.
    Proactive benefits:
    - Immediate analysis and output of the Workflow environment
    - Identifies aged records
    - Identifies Workflow errors & volumes
    - Identifies looping Workflow items and stuck activities
    - Identifies Workflow system setup and configurations
    - Identifies and recommends Workflow best practices
    - Easy-to-add tool for regular Workflow maintenance
    - Execute the analysis anytime to compare trends against past outputs
    The Workflow Analyzer presents key details in an easy-to-review graphical manner. See the examples below.
    Workflow Runtime Data Table Gauge: the gauge shows critical (red), bad (yellow) and good (green) depending on the number of workflow items (WF_ITEMS).
    Workflow Error Notifications Pie Chart: a pie chart shows the workflow error notification types.
    Workflow Runtime Table Footprint Bar Chart: a bar chart shows the workflow runtime table footprint.
    The analyzer also gives detailed listings of setups and configurations. As an example, the workflow services are listed along with their status for review. The analyzer draws attention to key details with yellow and red boxes highlighting areas to review. You can extend any query by reviewing the SQL script and then running it on your own, or modifying it for your own needs.
    Find more details in these notes:
    Doc ID 1369938.1 Workflow Analyzer script for E-Business Suite Workflow Monitoring and Maintenance
    Doc ID 1425053.1 How to run EBS Workflow Analyzer Tool as a Concurrent Request
    Or visit the My Oracle Support EBS - Core Workflow Community

    Read the article

  • Java Spotlight Episode 107: Adam Bien on JavaEE Patterns and Futures @AdamBien

    - by Roger Brinkley
    Interview with Adam Bien, Java Champion and Oracle ACE Director, on his book Real World Java EE Patterns - Rethinking Best Practices and on Java EE futures. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.
    News
    - NightHacking Tour Continues - Don't Miss It!
    - JavaFX Ensemble in the Mac App Store
    - Announcing the JavaFX UI controls sandbox
    - Java EE 7 Status Update - November 2012
    - 2012 Executive Committee (EC) Elections
    Events
    - Nov 5-9, Øredev Developer Conference, Malmö, Sweden
    - Nov 13-17, Devoxx, Antwerp, Belgium
    - Nov 20-22, DOAG 2012, Nuremberg, Germany
    - Dec 3-5, jDays, Göteborg, Sweden
    - Dec 4-6, JavaOne Latin America, São Paulo, Brazil
    - Dec 14-15, IndicThreads, Pune, India
    Feature Interview
    Adam Bien is a Java Champion, NetBeans Dream Team founding member, Oracle ACE Director, and Java Developer of the Year 2010. He has worked with Java since JDK 1.0 and with Servlets/EJB since their 1.0 releases. He participates in the JCP as an Expert Group member for the Java EE 6 and 7, EJB 3.X, JAX-RS, CDI, and JPA 2.X JSRs. He is the author of several books about JavaFX, J2EE, and Java EE, including Real World Java EE Patterns - Rethinking Best Practices and Real World Java EE Night Hacks - Dissecting the Business Tier. The Kindle version of Real World Java EE Patterns - Rethinking Best Practices was released October 31. It's only $9.99, but if you are an Amazon Prime member you can "borrow" the book for free.
    What's Cool
    - Building OpenJFX 2.2 Again

    Read the article

  • In Google webmaster tools, can a "soft 404" be triggered by the text on the page?

    - by Stephen Ostermiller
    I just ran across an error in Google Webmaster Tools that I have never seen before. I manage the website for my local community band (I play trombone). One of the pages on the site is a list of our upcoming performances. It is powered by a WordPress events plugin that uses a database of upcoming events entered through the administration interface. We just finished our summer and fall concerts, and our next performance will be our Christmas concert. I hadn't gotten around to adding that to the website yet, so there are no upcoming events shown on the page. In fact the text on the page says: "No upcoming events listed under Performance. Check out past events for this category or view the full calendar." In Google Webmaster Tools, this page is showing up as a "soft 404": the page returns a 200 status, yet Google flags the 404 as "soft". I wouldn't have expected Googlebot to be sophisticated enough to parse that particular sentence. Is Googlebot able to detect that the text on the page indicates there is currently no content, and treat it as a 404 page because of that? And if Google is treating this page as a soft 404 because of the text on the page, does that mean that, like regular 404 pages, the page won't show up in search results?

    Read the article

  • Why does XFBML work everywhere but in Chrome?

    - by Andrei
    I am trying to add a simple Like button to my Facebook Canvas app (iframe). The button (and all other XFBML elements) works in Safari, Firefox and Opera, but not in Google Chrome. How can I find the problem? EDIT1: This is the ERB layout in my Rails app:

        <html xmlns:fb='http://www.facebook.com/2008/fbml' xmlns='http://www.w3.org/1999/xhtml'>
        ...
        <body>
        ...
        <div id="fb-root"></div>
        <script>
          window.fbAsyncInit = function() {
            FB.init({ appId: '<%= @app_id %>', status: true, cookie: true, xfbml: true });
            FB.XFBML.parse();
          };
          (function() {
            var e = document.createElement('script');
            e.async = true;
            e.src = document.location.protocol
                  + '//connect.facebook.net/en_US/all.js#appId=<%=@app_id%>&amp;xfbml=1';
            document.getElementById('fb-root').appendChild(e);
          }());
          FB.XFBML.parse();
        </script>
        <fb:like></fb:like>
        ...

    JS error messages in the Chrome inspector:

        Uncaught ReferenceError: FB is not defined (anonymous function)
        Uncaught TypeError: Cannot call method 'appendChild' of null window (anonymous function)

    Probably similar to http://forum.developers.facebook.net/viewtopic.php?id=84684

    Read the article

  • Major Google not follow increase since introducing 301 to site

    - by jakob
    Recently we implemented Varnish in front of our web nodes so that the backend would get some rest from time to time. Since Varnish is case sensitive and our app was not, we implemented a 301 in Varnish to redirect to lower case. Example: if you search for PlumBer StockHOLM you get a 301 redirect to plumber stockholm, and then plumber stockholm is cached. This worked like a charm, but when checking Google Webmaster Tools we suddenly saw a crazy amount of "Status - Not able to follow" errors. This of course stirred up some panic and I started to read up on the documentation once again. Pressing one of the links took me to the help section, but it did not make things any clearer. As the day progressed, more and more errors were thrown by Google. We took the decision to make Varnish return a 200 instead of the 301. Now when testing the links that appear in the "Not able to follow" section I get a 200 back. I have tested with Chrome, curl and the lynx reader and everything looks OK, but the number of errors is still increasing. What is a little bit comforting is that the links appearing in the "Not able to follow" section are dated before the 200 change in Varnish. Why do I get these errors and why do they keep increasing? Did Google release something new on October 31? Maybe I do not understand the docs correctly?
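    A quick way to verify what a crawler actually receives from the cache is to request a mixed-case URL and print the status code and redirect target; a sketch, with a placeholder URL and user agent string:

        # Show the HTTP status and any redirect target Varnish hands back.
        curl -s -o /dev/null -w "%{http_code} -> %{redirect_url}\n" \
             -A "Googlebot/2.1" "http://example.com/search/PlumBer/StockHOLM"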

    Read the article

  • Demantra USA Based Companies and SOX Compliance

    - by user702295
    A USA based company is assessing Demantra Trade Promotion Management (TPM) capability. It appears that SOX compliance is necessary in their case due to the nature of what TPM does and the necessity for auditability. Do we have any detail on SOX compliance for Demantra?
    Answer
    SOX compliance with regards to IT:
    1. Requires auditing of data changes: by whom, what, and when.
       a. Audit trail profiles can be set up for key financial series, and viewed in audit trail reports.
       b. One piece of functionality we do not have, which is typically asked for, is user login history. We have only active sessions; history is not available.
    2. Segregation of duties.
       a. With respect to TPM, the deduction and financial analyst for settlement can be different from the promotion creator, promotion approver or sales team.
       b. The budget approver for funds can be different from the funds consumer.
       c. The promotion creator can be different from the promotion approver.
       d. For a US customer you may have to write some custom scripts to capture promotion status changes and produce an external report as part of compliance.
    One additional requirement is transparency of forward commitments entered into with retailers / distributors for trade spending and promotions. This is outside of Demantra - see Consumer Goods Trade Funds Analytics.

    Read the article

  • MCSE and MCSA makes a return to the world of certification..... but not as you know it.

    - by Testas
    Quick announcement: Microsoft Learning today announced the certification tracks for the upcoming SQL Server 2012 exams.
    You begin by achieving the MCSA - Microsoft Certified Solutions Associate (not to be confused with the old Microsoft Certified Systems Administrator). If you are starting out, this includes taking the following three exams:
    - Exam 70-461: Querying Microsoft SQL Server 2012
    - Exam 70-462: Administering Microsoft SQL Server 2012 Databases
    - Exam 70-463: Implementing a Data Warehouse with Microsoft SQL Server 2012
    If you already have an MCTS in SQL Server 2008, you can take the following path:
    - A pass in a SQL Server 2008 (MCTS) Microsoft Certified Technology Specialist exam
    - Exam 70-457: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    - Exam 70-458: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2
    Once you have achieved your MCSA status you can then start on your MCSE - Microsoft Certified Solutions Expert certification. You have a choice: MCSE: SQL Server 2012 Data Platform, MCSE: SQL Server 2012 Business Intelligence, or both.
    MCSE: SQL Server 2012 Data Platform involves:
    - Your SQL Server 2012 MCSA
    - Exam 70-464: Developing Microsoft SQL Server 2012 Databases
    - Exam 70-465: Designing Database Solutions for Microsoft SQL Server 2012
    There is also an upgrade path:
    - A pass in a SQL Server 2008 (MCITP) Microsoft Certified IT Professional Database Administrator or Database Developer certification
    - Exam 70-457: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    - Exam 70-458: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2
    - Exam 70-459: Transitioning your MCITP on SQL Server 2008 Database Administrator or Database Developer to MCSE: Data Platform
    MCSE: SQL Server 2012 Business Intelligence involves:
    - Your SQL Server 2012 MCSA
    - Exam 70-466: Implementing Data Models and Reports with Microsoft SQL Server 2012
    - Exam 70-467: Designing Business Intelligence Solutions with Microsoft SQL Server 2012
    The upgrade path involves:
    - A pass in a SQL Server 2008 (MCITP) Microsoft Certified IT Professional Business Intelligence certification
    - Exam 70-457: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    - Exam 70-458: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2
    - Exam 70-460: Transitioning your MCITP on SQL Server 2008 Business Intelligence Developer to MCSE: Business Intelligence
    As a result, if you want to achieve the MCSE in either Data Platform or Business Intelligence starting from scratch, there will be 5 exams to take. If you can upgrade because you already have an MCITP, it will be three exams.
    Full details and questions can be found at http://www.microsoft.com/learning/en/us/certification/cert-sql-server.aspx
    Thanks, Chris

    Read the article

  • Wake On Lan (WOL) for Realtek RTL8101E/RTL8102E

    - by Heisennberg
    I'm unsuccessfully trying to get Wake on LAN to work with my local server (IP address 192.168.0.2, distro Ubuntu 12.04.3 LTS), which has a Realtek RTL8101E/RTL8102E Ethernet card. The computer sending the WOL packet is a MacBook Pro connected to the same network. Yet the server fails to start. Here is what I have done so far:

        name@serverName ~ $ cat /proc/acpi/wakeup
        Device  S-state   Status     Sysfs node
        HDEF      S3    *disabled   pci:0000:00:1b.0
        PXSX      S3    *disabled
        PXSX      S0    *enabled    pci:0000:04:00.0
        PXSX      S0    *disabled
        USB1      S3    *enabled    pci:0000:00:1d.0
        USB2      S3    *enabled    pci:0000:00:1d.1
        USB3      S3    *enabled    pci:0000:00:1d.2
        USB5      S3    *enabled    pci:0000:00:1a.1
        EHC1      S3    *enabled    pci:0000:00:1d.7
        EHC2      S3    *enabled    pci:0000:00:1a.7

        name@serverName ~ $ lspci
        ------
        04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 01)
        ------

        name@serverName ~ $ sudo ethtool eth0
        Settings for eth0:
            Supported ports: [ TP MII ]
            Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full
            Supported pause frame use: No
            Supports auto-negotiation: Yes
            Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full
            Advertised pause frame use: Symmetric Receive-only
            Advertised auto-negotiation: Yes
            Link partner advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full
            Link partner advertised pause frame use: Symmetric Receive-only
            Link partner advertised auto-negotiation: Yes
            Speed: 100Mb/s
            Duplex: Full
            Port: MII
            PHYAD: 0
            Transceiver: internal
            Auto-negotiation: on
            Supports Wake-on: pumbg
            Wake-on: g
            Current message level: 0x00000033 (51)
                                   drv probe ifdown ifup
            Link detected: yes

    and I'm sending the WOL packet with:

        name@serverName ~ $ wakeonlan xx:xx:xx:xx:xx
        Sending magic packet to 255.255.255.255:9 with xx:xx:xx:xx:xx

    I have successfully activated the WOL option in my computer's BIOS. Any ideas?
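    The ethtool output above already shows Wake-on: g, but that flag can be lost on reboot or driver reload with some Realtek cards; a sketch of things worth trying (the interface name and broadcast address are assumed from the question):

        # Re-apply the magic-packet flag explicitly before powering off:
        sudo ethtool -s eth0 wol g
        # Send the magic packet to the subnet broadcast address instead of
        # 255.255.255.255, which some routers refuse to forward:
        wakeonlan -i 192.168.0.255 xx:xx:xx:xx:xx:xx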

    Read the article

  • Cannot enable wireless on an Intel WifiLink 1000 on a Lenovo IdeaPad Z570

    - by Brij
    I am using Ubuntu 11.10 on a Lenovo IdeaPad Z570. My wireless Internet is not working, and I have ensured that the wireless switch is on. Under Windows 7 the wireless works great; however, Ubuntu 11.10 is not allowing me to enable the wireless connection. I have run the following command; here is the status:

        sudo lshw -class network
        *-network DISABLED
             description: Wireless interface
             product: Centrino Wireless-N 1000
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:02:00.0
             logical name: wlan0
             version: 00
             serial: 74:e5:0b:1c:a4:a4
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-12-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
             resources: irq:42 memory:d0500000-d0501fff
        *-network
             description: Ethernet interface
             product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:03:00.0
             logical name: eth0
             version: 05
             serial: f0:de:f1:64:b6:62
             size: 10Mbit/s
             capacity: 100Mbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=rtl_nic/rtl8105e-1.fw latency=0 link=no multicast=yes port=MII speed=10Mbit/s
             resources: irq:41 ioport:2000(size=256) memory:d0404000-d0404fff memory:d0400000-d0403fff

    Here is the rfkill list all output:

        rfkill list all
        0: ideapad_wlan: Wireless LAN
              Soft blocked: no
              Hard blocked: no
        1: phy0: Wireless LAN
              Soft blocked: no
              Hard blocked: no
        2: acer-wireless: Wireless LAN
              Soft blocked: yes
              Hard blocked: no

    Note: under Windows 7 the wireless card properties show it as an Intel WiFi Link 1000 BGN. Could someone help me fix this issue?
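    Since the rfkill output above shows the acer-wireless device soft blocked, a natural first step is clearing the block and re-checking the interface; a sketch (the wlan0 name is taken from the lshw output):

        # Clear all soft blocks reported by rfkill, then verify:
        sudo rfkill unblock all
        rfkill list all
        # If lshw still reports the interface DISABLED, try bringing it up:
        sudo ip link set wlan0 up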

    Read the article

  • Windows Azure Evolution - Web Sites (aka Antares) Part 1

    - by Shaun
    This is the 3rd post of my Windows Azure Evolution series, focusing on the new features and enhancements that came along with the Windows Azure Platform Upgrade June 2012, announced at the MEET Windows Azure event on 7th June. In the first post I introduced the new preview developer portal and how it works for the existing features such as cloud services, storage and SQL databases. In the second one I talked about the Windows Azure .NET SDK 1.7 on the latest Visual Studio 2012 RC on Windows 8. From this one I will begin to introduce some new features. Now let's have a look at the first of them, Windows Azure Web Sites.

    Overview

    Windows Azure Web Sites (WAWS), also known as Antares, is a new feature still in preview stage in this upgrade. It allows people to quickly and easily deploy websites to a highly scalable cloud environment, using the languages and open source apps of their choice, deploying via FTP, Git or TFS. It can also be integrated easily with Windows Azure services like SQL Database, Caching, CDN and Storage. After reading its introduction we may have a question: since we can deploy a website from both a cloud service web role and a web site, what is the difference between them? So, let's have a quick comparison.

                            CLOUD SERVICE                              WEB SITE
        OS                  Windows Server                             Windows Server
        Virtualization      Windows Azure Virtual Machine              Windows Azure Virtual Machine
        Host                IIS                                        IIS
        Platform            ASP.NET WebForm, ASP.NET MVC, WCF          ASP.NET WebForm, ASP.NET MVC, PHP
        Language            C#, VB.NET                                 C#, VB.NET, PHP
        Database            SQL Database                               SQL Database, MySQL
        Architecture        Multi-layered, background worker,          Simple website with backend database
                            message queuing, etc.
        VS Project          Windows Azure Cloud Service                ASP.NET Web Form, ASP.NET MVC, etc.
        Out-of-box Gallery  (none)                                     Drupal, DotNetNuke, WordPress, etc.
        Deployment          Package upload, Visual Studio publish      FTP, Git, TFS, WebMatrix
        Compute Mode        Dedicated VM                               Shared across VMs, dedicated VM
        Scale               Scale up, scale out                        Scale up, scale out

    As you can see, there are many differences between the cloud service and the web site, but the main point is that the cloud service focuses on complex, multi-tier web applications. For example, if you want to build a website with a frontend layer, a middle business layer and a data access layer, plus a background worker process connected through a message queue, then you should use a cloud service, since it provides full control of your code and application. But if you just want to build a personal blog or a business portal, then you can use a web site. Since web sites offer many galleries, you can create them without any coding or configuration. David Pallmann has an awesome figure explaining the trade-offs between the cloud service, web site and virtual machine.

    Create a Personal Blog in Web Site from Gallery

    As I mentioned above, one of the big features in WAWS is building a website from an existing gallery, with no coding or configuration needed. All we have to do is open the Windows Azure developer portal, click the NEW button, and select WEB SITE and FROM GALLERY. In the pop-up window there are many websites to choose from. For example, for a personal blog there are Orchard CMS and WordPress; for a CMS there are DotNetNuke, Drupal 7 and mojoPortal. Let's select WordPress and click the next button. The next step is to configure the web site: we need to specify the DNS name and select the subscription and region. Since WordPress uses MySQL as its backend database, we need to create a MySQL database as well.
    Windows Azure Web Sites utilizes ClearDB to host the MySQL databases; you cannot create a MySQL database directly from the SQL Databases section. Finally, since we selected to create a new MySQL database, we specify the database name and region in the last step and accept ClearDB's terms. The Windows Azure platform then downloads the WordPress code and deploys the MySQL database and website, and it is ready to use. Select the website and click the BROWSE button, and the WordPress administration page will be shown. After configuring WordPress, here is my personal blog in the cloud. It took me no more than 10 minutes to establish, without any coding.

    Monitor, Configure, Scale and Linked Resources

    Let's click into the website I have just created in the portal and have a look at what we can do. In the website details page there are five sections.
    - Dashboard: the overall information about this website, such as basic usage status, public URL, compute mode, FTP address, subscription, and links where we can specify the deployment credentials, TFS and Git publish settings, etc.
    - Monitor: status information such as CPU usage, memory usage and errors. We can add more metrics by clicking the ADD METRICS button at the bottom.
    - Configure: here we can set the configuration of our website, such as the .NET and PHP runtime versions, diagnostics settings, application settings and the IIS default documents.
    - Scale: this is something interesting. In WAWS there are two compute modes, also called web site modes. One is "shared", which means our website is shared with other web sites on a group of Windows Azure virtual machines. Each web site has its own process (w3wp.exe) with some sandbox technology to isolate it from the others. When we scale out a web site in shared mode, we actually increase the worker process count; hence in shared mode we cannot specify the virtual machine size, since the machines are shared across all web sites. This is a little different from the scaling mode of the cloud service (hosted service web role and worker role). The other mode is called "dedicated", which means our web site uses whole Windows Azure virtual machines. This is the same hosting behavior as a cloud service web role: the application is deployed on the virtual machines we specify and all of them are used only by us. In dedicated mode, scaling out means using more virtual machines, each hosting only our own website, and we can specify the virtual machine size. In the developer portal we select the mode from the scale section: in shared mode we can only specify the instance count, while in dedicated mode we can specify the instance size as well as the instance count.
    - Linked Resources: the MySQL database created along with our WordPress web site is a linked resource. We can add more linked resources in this section.

    Pricing

    For the web site itself, since this feature is in its preview period, shared mode gives you up to 10 web sites for free. If you are using dedicated mode, the price is that of the virtual machines you are using; for example, two dedicated medium-size virtual machines cost $230.40 per month. If there is a SQL Database linked to your web site, it is charged separately based on the Pay-As-You-Go price.
    For example, a 1GB web edition database costs $9.99 per month, and bandwidth is charged as well: 10GB of outbound data transfer costs $1.20 per month. For more information about the pricing, please have a look at the Windows Azure pricing page.

    Summary

    Windows Azure Web Sites gives us an easier and quicker way to create, develop and deploy websites to the Windows Azure platform. Compared with the cloud service web role, WAWS has many out-of-box galleries we can use directly. So if you just want to build a blog, CMS or business portal, you don't need to learn ASP.NET, you don't need to learn how to configure DotNetNuke, and you don't need to prepare PHP and MySQL. By using a WAWS gallery you can establish a website within 10 minutes without a single line of code. But in some cases we do need to code ourselves: we may need to tweak the layout of our pages, or we may have a traditional ASP.NET or PHP web application that needs to be migrated to the cloud. Besides the gallery, WAWS also provides many features to download and upload code, integration with version control services such as TFS and Git, and deployment through FTP and Web Deploy. In the next post I will demonstrate how to use WebMatrix to download and modify the website, and how to use TFS and Git to deploy automatically once our code changes are committed.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Building gstreamer_ndk_bundle problems

    - by Cipi
    I'm trying to build gstreamer_ndk_bundle under Ubuntu 12.04 and I'm failing miserably! I have installed all "glib-dev" packages (packages whose names contain glib and dev), and I have also tried to compile/install glib 2.33.1 (latest) from source, but I always get this error:

        /home/marko/gstreamer_ndk_bundle/jni/../glib/gobject/gmarshal.c:149: undefined reference to `g_value_get_schar'
        collect2: ld returned 1 exit status
        make: *** [/home/marko/gstreamer_ndk_bundle/obj/local/armeabi/libgobject-2.0.so] Error 1

    This means the glib source being compiled doesn't have the definition of g_value_get_schar, and since that function was introduced in glib somewhere after version 2.30.0, my guess is that I am not using the proper glib! I tried to force gstreamer_ndk_bundle to build with the sources from /home/marko/glib-2.33.1/, which I compiled/installed, by exporting these env vars:

        GLIB_GENMARSHAL=/home/marko/glib-2.33.1/gobject/glib-genmarshal
        GLIB_COMPILE_SCHEMAS=/home/marko/glib-2.33.1/gio/glib-compile-schemas

    I also changed gmarshal.h so it includes the gmarshal.h from the installed glib folder:

        #ifndef _marko_glib_loaded
        #define _marko_glib_loaded
        #include "/home/marko/glib-2.33.1/gobject/gmarshal.h"
        #endif

    But I failed in both cases. How can I know which glib is used while compiling gstreamer, and install the proper one? How can I force gstreamer_ndk_bundle to use the glib sources from the folder I have untarred/configured/installed instead of the system ones, or whatever ones it uses? I read somewhere that I need the gstreamer-devel package if I keep getting this error while compiling. Where can I find that package? I can't Google it out... Has anyone EVER built gstreamer_ndk_bundle and lived to tell the tale?
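    One way to answer the "which glib is actually being compiled" question is to search the candidate source trees for the missing symbol; a diagnostic sketch using the paths from the error above:

        # Does the glib bundled in the NDK tree define the missing symbol?
        grep -rn "g_value_get_schar" /home/marko/gstreamer_ndk_bundle/glib/gobject/
        # Does the locally built 2.33.1 tree define it?
        grep -rn "g_value_get_schar" /home/marko/glib-2.33.1/gobject/
        # Locate every gmarshal.c the build could be picking up:
        find /home/marko -name gmarshal.c 2>/dev/null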

    Read the article

  • Problems after bumblebee installation

    - by Samuel
    I tried to install bumblebee on Ubuntu 12.04 LTS by following the steps on the Ubuntu wiki site. But when I used this command:

        sudo add-apt-repository ppa:bumblebee/stable && sudo apt-get update

    this output came out:

        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.q0zzLiXVT3 --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv 46C0364A882F14F899448FFCB22A95F88110A93A
        gpg: requesting key 8110A93A from hkp server keyserver.ubuntu.com
        gpg: key 8110A93A: "Launchpad PPA for Bumlebee Project" not changed
        gpg: Total number processed: 1
        gpg: unchanged: 1
        E: Type 'ain' is not known on line 3 in source list /etc/apt/sources.list.d/bumblebee-stable-precise.list
        E: The list of sources could not be read.

    The same problem message appears when I try to run the update center:

        E: Type 'ain' is not known on line 3 in source list /etc/apt/sources.list.d/bumblebee-stable-precise.list
        E: The list of sources could not be read.
        E: The package lists or status file could not be parsed or opened.

    I don't know what to do since I'm a newbie at Linux. Thanks in advance, Samuel.
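    The error points at a corrupted line 3 in the PPA's source list file; a sketch for inspecting and removing it so apt can read the sources again (the file path is taken from the error message, and the assumption is that line 3 is unsalvageable junk):

        # Show the file with line numbers; line 3 should start with "deb" or "deb-src".
        cat -n /etc/apt/sources.list.d/bumblebee-stable-precise.list
        # If line 3 is garbage (apt parsed the type "ain" from it), delete that line:
        sudo sed -i '3d' /etc/apt/sources.list.d/bumblebee-stable-precise.list
        sudo apt-get update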

    Read the article

  • Troubleshoot broken ZFS

    - by BBK
    I have one zpool called tank, in RAIDZ1, with 5x1TB SATA HDDs. I'm using Ubuntu Server 11.10 Oneiric, kernel 3.0.0-15-server, with ZFS installed from the ppa, and I'm also using zfs-auto-snapshot. The ZFS file system hangs my computer when the zfs module is loaded into the kernel. Before this started, I created a few new file systems:

        zfs create -V 10G tank/iscsi1
        zfs create -V 10G tank/iscsi2
        zfs create -V 10G tank/iscsi3

    I shared them through iSCSI via the /dev/tank/iscsiX path, and my computer started hanging sometimes when I used tank/iscsiX over iSCSI; I do not know exactly why. I switched off iSCSI and started to remove the file systems:

        zfs destroy tank/iscsi3

    Since I'm using zfs-auto-snapshot I had snapshots, and without the -r flag the command would not destroy the file system, so I issued:

        zfs destroy tank/iscsi3 -r

    The tank/iscsi3 FS was clean and contained nothing; it was destroyed without an issue. But tank/iscsi2 and tank/iscsi1 contained a lot of information. I tried:

        zfs destroy tank/iscsi2 -r

    After some time my computer hung. I rebooted it. It didn't boot very fast; the HDDs started working like crazy, making a lot of noise, and after 15 minutes they stopped and the OS finally booted. Everything seemed to be OK - tank/iscsi2 was destroyed. Once the file systems in the tank were accessible, zpool status showed no corruption. I issued the next command:

        zfs destroy tank/iscsi1 -r

    The situation repeated: after some time my computer hung. But this time ZFS seems not to have healed itself. After the computer was switched on it started to work: loading scripts and kernel modules; once zfs started working, it hung the computer. I need to recover the other ZFS file systems lying in the same zpool. A few months ago I backed up the OS to a flash drive. Booting from the backed-up OS and importing has the same result - the OS starts hanging. How can I recover my data from the ZFS tank?
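    One recovery approach sometimes suggested for a pool that hangs while resuming an interrupted destroy is a read-only import from a rescue environment, so the pending destroy is not replayed; a sketch only, and whether it works depends on the zfsonlinux version in use:

        # From a live/rescue system with the zfs module loaded:
        sudo zpool import                         # list importable pools first
        sudo zpool import -o readonly=on -f tank  # import without replaying writes
        # If the pool comes up, copy the surviving file systems off it:
        zfs list -r tank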

    Read the article

  • Couldn't make Angry Birds work on Wine

    - by Ashfame
    I could run Notepad++ easily, but I fail to run the Angry Birds exe. Whenever I open the exe, one of my screens flickers a bit (as lines, not the whole screen) and nothing happens. Any ideas? Edit: output of wine angrybirds.exe:

        fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.VC80.CRT" (8.0.50727.4053)
        fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.VC90.CRT" (9.0.21022.8)
        err:module:import_dll Library MSVCP90.dll (which is needed by L"C:\\windows\\system32\\AppUpWrapper.dll") not found
        err:module:import_dll Library AppUpWrapper.dll (which is needed by L"C:\\windows\\system32\\angrybirds.exe") not found
        err:module:LdrInitializeThunk Main exe initialization for L"C:\\windows\\system32\\angrybirds.exe" failed, status c0000135

    I think it didn't even install. I manually dropped those files into the folder but still no gain. Edit: progress. I dropped the file MSVCP90.dll in manually and now this is what I get in the output:

        fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.VC80.CRT" (8.0.50727.4053)
        fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.VC90.CRT" (9.0.21022.8)
        fixme:heap:HeapSetInformation 0x541000 0 0x32fd48 4
        fixme:heap:HeapSetInformation (nil) 1 (nil) 0
        EXCEPTION: Failed to open data/scripts/starLimits.lua
        wine: Unhandled exception 0x40000015 at address 0x7b880023:0x78b271d0 (thread 0009), starting debugger...
        fixme:msvcr90:__clean_type_info_names_internal (0x10267694) stub
        fixme:msvcr90:__clean_type_info_names_internal (0x78506644) stub
        ashfame@ashfame-desktop:~$ Process of pid=0008 has terminated
        No process loaded, cannot execute 'echo Modules:'
        Cannot get info on module while no process is loaded
        No process loaded, cannot execute 'echo Threads:'
        process  tid      prio (all id:s are in hex)
        0000000e services.exe
            00000014    0
            00000010    0
            0000000f    0
        00000011 winedevice.exe
            00000018    0
            00000016    0
            00000013    0
            00000012    0
        00000019 explorer.exe
            0000001a    0
        You must be attached to a process to run this command.
        No process loaded, cannot execute 'detach'

    and there the terminal hangs (I have to Ctrl + C to get out). It shows the famous message that it needs to close down. Edit: just to let you know that I am still stuck on this. I don't use wine for anything else, so I am ready to do a clean install of wine and everything if anyone is willing to provide instructions.
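    The missing Microsoft.VC80.CRT / Microsoft.VC90.CRT assemblies are the Visual C++ 2005/2008 runtimes; rather than dropping DLLs in by hand, a common sketch is to install them with winetricks and then run the exe from the game's own directory (the "Failed to open data/scripts/starLimits.lua" line suggests a relative-path dependency; the install path below is hypothetical):

        # Install the VC++ 2005 (VC80) and 2008 (VC90) runtimes into the wine prefix:
        winetricks vcrun2005 vcrun2008
        # Run the game from its installation directory so relative data paths resolve:
        cd ~/.wine/drive_c/"Program Files"/AngryBirds   # hypothetical install path
        wine angrybirds.exe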

    Read the article

  • Partner Spotlight

    - by rituchhibber
    FADATA
    Fadata (www.fadata.bg) officially became a WebCenter Content Specialized partner in the Adriatic region upon successful completion of the corresponding Oracle specialization tests. "Being recognized by Oracle and customers will greatly help our company in maintaining a high level of implementation services related to Oracle WebCenter Content, one of the strategic products we are focused on. This certification, which our team is very proud of, will certainly help FADATA gain additional advantage, competitiveness, and integrity in implementing Oracle WebCenter Content solutions, both on current and future projects in the region," according to Velimir Corovic and Marjan Nikolic from Fadata.
    FISHBOWL SOLUTIONS
    Google Search Appliance Connector for Oracle WebCenter Content. The Google Search Appliance (GSA) provides fast search for your intranet or website. Fishbowl Solutions provides a connector for the Google Search Appliance that integrates it with the Oracle WebCenter Content Server while retaining the security benefits of Oracle WebCenter Content. For more information and a real customer example, see the Fairview Health Services case study or the webinar recording.
    SIGNUM
    Signum TTE, from Turkey and a Gold member of Oracle® PartnerNetwork (OPN), recently announced it has achieved OPN Specialized status for Oracle WebCenter Portal. Signum TTE, which began operations in the IT sector in 2005, is an innovative software solution house focusing on two main areas: services and solutions for Oracle Middleware products and technologies, and its own product "WinDesk Service Management".

    Read the article

  • Debuild fails to make package for bluelog-1.04

    - by Dean Howell
    When trying to build a package for bluelog, debuild gives several errors. In the past I've used checkinstall to quickly build crude packages; I am now trying to do it the right way and upload to a PPA. Bluelog can be found here: http://www.digifail.com/software/bluelog.shtml. Here is the output from debuild:

        dpkg-buildpackage -rfakeroot -D -us -uc
        dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security
        dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor): -D_FORTIFY_SOURCE=2
        dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security
        dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2
        dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions -Wl,-z,relro
        dpkg-buildpackage: source package bluelog
        dpkg-buildpackage: source version 1.0.4-0ubuntu1
        dpkg-buildpackage: source changed by Dean Howell <dean@unknown>
        dpkg-source --before-build bluelog
        dpkg-buildpackage: host architecture amd64
        fakeroot debian/rules clean
        dh clean
           dh_testdir
           dh_auto_clean
        make[1]: Entering directory `/home/dean/Launchpad Builds/bluelog/bluelog'
        rm -rf bluelog www/cgi-bin/* *.o *.txt *.log *.gz *.cgi
        make[1]: Leaving directory `/home/dean/Launchpad Builds/bluelog/bluelog'
           dh_clean
        dpkg-source -b bluelog
        dpkg-source: warning: Version number suggests Ubuntu changes, but Maintainer: does not have Ubuntu address
        dpkg-source: warning: Version number suggests Ubuntu changes, but there is no XSBC-Original-Maintainer field
        dpkg-source: info: using source format `3.0 (quilt)'
        dpkg-source: info: building bluelog using existing ./bluelog_1.0.4.orig.tar.gz
        dpkg-source: error: cannot represent change to bluelog/Builds/bluelog/bluelog/debian/bluelog/usr/bin/bluelog: binary file contents changed
        dpkg-source: error: add Builds/bluelog/bluelog/debian/bluelog/usr/bin/bluelog in debian/source/include-binaries if you want to store the modified binary in the debian tarball
        dpkg-source: error: unrepresentable changes to source
        dpkg-buildpackage: error: dpkg-source -b bluelog gave error exit status 2
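    The failure is dpkg-source finding a previously built binary (debian/bluelog/usr/bin/bluelog) inside the source tree; note also that the build directory "Launchpad Builds" contains a space, which dpkg tooling handles badly. A sketch of the usual cleanup (paths are taken from the output above; the space-free target path is an assumption):

        # Move the tree to a path without spaces, then purge stale build artifacts:
        mv "/home/dean/Launchpad Builds" /home/dean/launchpad-builds
        cd /home/dean/launchpad-builds/bluelog/bluelog
        rm -rf debian/bluelog            # leftover staging dir from an earlier build
        fakeroot debian/rules clean
        debuild -S -us -uc               # rebuild the source package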

    Read the article

  • Exalogic&ndash;The One Day Installation Challenge

    - by james.bayer
    It’s a really exciting time for the extended WebLogic community as we are enjoying seeing the impressive results of Exalogic deployments.  At Oracle Open World, a lot of people I spoke with came away impressed with the raw performance.  However, Exalogic offers a lot more than just raw performance.  I had the pleasure of working with Ram Sivaram during one of the Exalogic training sessions in Santa Clara.  In this video diary, he shows the Exalogic machine arrive on the shipping dock, get unpacked, wired up, powered on, configured, and installed with a WebLogic Server cluster in just about 10 hours.  I’ve worked with customers in the past that have taken several weeks or longer to get an environment ready after the hardware arrives.  This typically involves many different specialized teams in their organization.  Mohamad Afshar just wrote a great explanation of the benefit of Engineered Systems and contrasting that to the status quo.  Being able to streamline deployment of middleware capacity will have a lot of value for customers shortening time to deployment.  Thanks for the video Ram, you’ve set a high bar, we’ll see if anyone can top your time!  

    Read the article

  • sudo dhclient eth0 | sudo: unable to resolve host ubuntu

    - by Merianos Nikos
    I have a friend's computer that runs Ubuntu (I don't know which version, due to the current system status). While he was updating the kernel, he rebooted the computer (yes, that really happened!). Currently I am trying to recover the system using a live USB with Ubuntu installed on it. What I am doing is following this: Update Failure. The problem is that when I try to execute the fifth step, I get an error because I do not have Internet access. The computer is properly wired to my router, and I have Internet access everywhere apart from the recovery shell (this message, for example, is sent via the live USB), but not from the shell itself. In the shell I try this command:

        sudo dhclient eth0

    but the result is the following message:

        sudo: unable to resolve host ubuntu

    My hosts file has the following content:

        127.0.0.1   localhost
        127.0.1.1   ubuntu

        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        ff02::3 ip6-allhosts

    How can I get connected to the Internet in order to download the appropriate updates?
    UPDATE 1: I just noticed that when I execute ifconfig I get the following warning: Warning: cannot open /proc/net/dev (No such file or directory). Limited output.
    UPDATE 2: I just found that, and it looks like it solves the problem with the dhclient eth0 command, but I still cannot ping Google.
    UPDATE 3: Now sudo dhclient eth0 returns the following message: RTNETLINK answers: File exists
    UPDATE 4: I just pinged my router and I am getting a response, so it looks like I cannot ping outside the router (i.e. Google).
    Kind regards ...
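    "sudo: unable to resolve host ubuntu" usually means the hostname in use has no matching /etc/hosts entry in the environment where sudo runs, and the missing /proc/net/dev warning suggests the virtual file systems are not mounted in the chroot; a sketch of the usual checks (the hostname "ubuntu" is taken from the error message):

        # Inside the chroot/recovery shell:
        hostname                        # e.g. prints "ubuntu"
        grep ubuntu /etc/hosts          # should show the 127.0.1.1 line for it
        # If the names differ, add the actual hostname:
        echo "127.0.1.1  $(hostname)" >> /etc/hosts
        # Mount the virtual file systems before running networking commands:
        mount -t proc proc /proc
        mount -t sysfs sys /sys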

    Read the article

  • Could not calculate upgrade from Maverick Meerkat to Natty Narwhal

    - by xralf
    I upgraded from Ubuntu Lucid Lynx to Maverick Meerkat with the following commands:

        sudo apt-get update && sudo apt-get upgrade
        sudo apt-get install update-manager-core
        sudo vi /etc/update-manager/release-upgrades   # changed the last line to Prompt=normal
        sudo do-release-upgrade -d

    This upgrade was OK. I decided to repeat the same steps to upgrade Maverick Meerkat to Natty Narwhal. It ended with this message:

        Building data structures... Done
        Calculating the changes
        Calculating the changes
        Could not calculate the upgrade
        An unresolvable problem occurred while calculating the upgrade:
        Can not mark 'xubuntu-desktop' for upgrade

        This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu

        If none of this applies, then please report this bug against the
        'update-manager' package and include the files in /var/log/dist-upgrade/
        in the bug report.
        Restoring original system state

        Aborting
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        === Command detached from window (Mon Nov 21 09:37:21 2011) ===
        === Command terminated with exit status 1 (Mon Nov 21 09:37:21 2011) ===

    How can I correct it?
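    A sketch of the usual first diagnostics when a metapackage such as xubuntu-desktop cannot be marked for upgrade (the package name is from the error; the PPA cleanup step is an assumption about the cause):

        # See which repository currently provides the blocking package:
        apt-cache policy xubuntu-desktop
        # Simulate installing it to surface the dependency conflict:
        sudo apt-get install -s xubuntu-desktop
        # If a third-party PPA pins newer packages, revert it before retrying:
        sudo apt-get install ppa-purge
        sudo ppa-purge ppa:someuser/someppa    # hypothetical PPA name
        # The dist-upgrade logs usually name the exact conflict:
        less /var/log/dist-upgrade/main.log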

    Read the article

  • Formalizing a requirements spec written in narrative English

    - by ProfK
    I have a fairly technical functional requirements spec, expressed in English prose, produced by my project manager. It is structured as a collection of UI tabs, where the requirements for each tab are expressed as a list of UI fields and a list of business rules for the tab. Most business rules apply to UI fields on a tab, e.g.:
    a) Must be alphanumeric, max length 20.
    b) Must be a dropdown, with values from table x.
    c) Is mandatory.
    d) Is mandatory under certain conditions, e.g. another field is populated, or has a specific value.
    Then other business rules get a little more complex. The spec is for a job application, so the central business object (table) is the Applicant, and we have several other tables with one-to-many relationships with Applicant, such as Degree, HighSchool, PreviousEmployer, Diploma, etc.
    e) One such complex rule says a status field can only be assigned a certain value if a record exists in at least one of the many-side tables, e.g. the Applicant has at least one HighSchool or at least one Diploma record.
    I am looking for advice on how to codify these requirements into a more structured specification defined in terms of tables, fields, and relationships, especially for the conditional rules on fields and for the presence of related records. Any suggestions and advice will be most welcome, but I would be overjoyed if I could find an already defined system or structure for expressing things like this.

    Read the article
