Search Results

Search found 15798 results on 632 pages for 'authentication required'.


  • Apache doesn't load .php files

    - by Haddex
    First, sorry for my English, and for asking something that has been answered all over the web. I've read a lot of posts about this problem but I still can't find the solution. I'm a web developer who recently moved to Ubuntu from Windows 7. I had a website done (it's online and working) and I set up LAMP to keep working on it. I made a test.php file containing <?php phpinfo(); ?> and put it in the /var/www/html directory; it showed all the information about PHP and I was really happy: "OK, it's all done, tomorrow I will work hard." But I placed my whole site into /var/www/html, not in a folder - the index.php is in /var/www/html - and guess what: none of my .php files load, the browser just keeps thinking. What I did: I restarted Apache (/etc/init.d/apache2 restart); I tried the test.php file again and it works fine; I put a .html file in /var/www/html and it works fine. I looked at /etc/apache2/sites-enabled/000-default.conf and it says DocumentRoot /var/www/html. I looked at /etc/apache2/mods-enabled/dir.conf and it says DirectoryIndex index.html index.cgi index.pl index.php ...
    Edit: I think it's something related to phpMyAdmin, as if I'm not able to connect to the database. But I get nothing on the screen when trying to load the page, so I'm not sure. I can access the URL localhost/phpmyadmin, and I edited the connection.php file like this:
    <?php
    # FileName="Connection_php_mysql.htm"
    # Type="MYSQL"
    # HTTP="true"
    $hostname_rakstadconnection = "localhost";
    $database_rakstadconnection = "rakstadclandb";
    $username_rakstadconnection = "root";
    $password_rakstadconnection = "admin";
    $rakstadconnection = mysql_connect($hostname_rakstadconnection, $username_rakstadconnection, $password_rakstadconnection) or trigger_error(mysql_error(), E_USER_ERROR);
    mysql_query("SET NAMES 'utf8'");
    ?>
    The name of the database is correct, as are the user and password.
    http://i89.photobucket.com/albums/k220/Haddex/Capturadepantallade2014-06-09112609_zpsc45ddb72.png
    http://i89.photobucket.com/albums/k220/Haddex/Capturadepantallade2014-06-09112120_zps0b9e15f7.png
    Edit 2: could this be because it's a website that I brought to Linux from Windows? I used Dreamweaver.
    Edit 3: I changed the # comments to /* */; nothing. The error.log file says:
    [Mon Jun 09 17:08:13.627881 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Warning: require_once(/var/www/html/Connections/rakstadconnection.php): failed to open stream: Permission denied in /var/www/html/index.php on line 1
    [Mon Jun 09 17:08:13.627933 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Fatal error: require_once(): Failed opening required 'Connections/rakstadconnection.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/html/index.php on line 1
    I'm reading the error log, but... should I add a Linux path to my index.php file? I don't think so. Thanks.
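    The "Permission denied" warning in that error log suggests the Apache user cannot read the Connections file, rather than a PHP or database problem. As a rough diagnostic sketch (the path is taken from the log above; run it as a page served by Apache), something like this can confirm it before touching ownership or modes:
    <?php
    // Illustrative check only: can the web server's user see and read the failing include?
    $path = '/var/www/html/Connections/rakstadconnection.php';
    var_dump(file_exists($path));   // false => the path itself is wrong
    var_dump(is_readable($path));   // false while file_exists() is true => permissions/ownership
    ?>
    If is_readable() comes back false, giving the Apache user (www-data on Ubuntu) read access to the files copied over from Windows is the usual fix.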

    Read the article

  • Partner Webcast – Oracle SOA Suite 12c: Connect 4 Cloud, Mobile, IoT with On-premise - August 28th 2014

    - by JuergenKress
    Thursday, August 28th 2014 - SOA Suite 12c Webcast
    The pace of new business projects continues to grow, from increasing customer self-service to seamlessly connecting all your back-office and in-the-field applications. At the same time, increased integration complexity may seem inevitable as organizations are suddenly faced with the requirement to support three new integration challenges:
    » Cloud Integration - integrate with the cloud; rapidly integrate a growing list of cloud applications with existing applications
    » Mobile Integration - the urgency to mobile-enable existing applications
    » IoT Integration - begin development on the latest trend of connecting Internet of Things (IoT) devices to your existing infrastructure.
    Join this webcast to get an overview of what is in Java 8 from a business perspective and how, with Java 8, you are uniquely positioned to extend innovation in your solutions through the largest, open, standards-based, community-driven platform.
    Oracle SOA Suite 12c
    Oracle SOA Suite 12c, the latest version of the industry's most complete and unified application integration and SOA solution, aims to simplify, accelerate and optimize integrations. Oracle SOA Suite 12c and its associated products - Oracle Managed File Transfer, Oracle Cloud and Application Adapters, B2B and healthcare integration - offer the industry's most highly integrated platform for solving the increased integration challenges. Oracle SOA Suite 12c is a complete, integrated and best-of-breed platform. It enables next-generation integration capabilities through: a unified toolset for the development of services and composite applications; a standards-based platform that is service-enabled and easily consumable by modern web applications, allowing enterprises to quickly and easily adapt to changes in their business and IT environments; and greater visibility, controls and analytics to govern how services and processes are deployed, reused and changed across their entire lifecycle.
    Join us to find out more about the new features of Oracle SOA Suite 12c and how it enables you to reduce time to market for new project integration and to reduce integration cost and complexity. Oracle SOA Suite 12c gives you the ability to simplify by integrating the disparate requirements of cloud, mobile, and IoT devices with existing on-premise applications.
    Agenda: Oracle SOA Suite 12c new features; Cloud Integration; Mobile Enablement; Internet of Things (IoT); Summary - Q&A.
    For details please visit our registration page here. Thursday, Aug 28th 2014, 10am CET (9am GMT / 11am EEST).
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Technorati Tags: SOA Suite 12c,Community,Oracle SOA,Oracle BPM,OPN,Jürgen Kress,SOA

    Read the article

  • WebLogic Server 11gR1 Interactive Quick Reference

    - by JuergenKress
    The WebLogic Server 11gR1 Administration interactive quick reference is a multimedia tool covering various terms and concepts used in the WebLogic Server architecture. This tool is available to administrators for online or offline use. It is built as a multimedia web page which provides descriptions of WebLogic Server architectural components and references to relevant documentation. This tool offers valuable reference information for any complex concept or product in an intuitive and useful manner. Each interactive type presents data that may be available in the documentation (in the case of Oracle products), but presents it in a way that is more intuitive and useful to a user of Oracle products because it displays data the way it is used in a real-world, best-practice scenario. For example, the architectural diagram interactive type provides an image of an architectural diagram that is typically larger than a single slide or page. The image is scrollable and provides zoom capabilities to easily and clearly view any part of the image. The image itself contains a hotspot map that you can click to get more information about a feature, including reference links to the documentation in question. Linking the visual image of the component, and where it fits in the overall architecture of the product or technology in use, to the technical explanation and how-to materials related to that component is something not offered by the documentation. In a future release, the poster will also enable you to drill down even further into the individual subsystems in nested diagrams to look at the details of that subsystem. In short, the interactive posters are good at showing you the big picture, then quickly and easily getting you to the detailed information you need. In an instant, you can see where a technical component fits into an overall architecture, and zero in on the nitty-gritty details that show you how to do it yourself.
    Note: This is a first release, with more features in development. Currently known limitations: only Firefox 8.0 and higher is known to work with this product; it may work with the Chrome and Safari browsers, but is known to have issues in Internet Explorer at this time; mobile devices, such as iPads and iPhones, are partially supported.
    WebLogic Partner Community: For regular information become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Technorati Tags: WebLogic server quick reference,weblogic overview,weblogic 12c,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • eSTEP Newsletter October 2012 now available

    - by uwes
    Dear Partners,
    We would like to inform you that the October '12 issue of our Newsletter is now available. The issue contains information on the following topics:
    News from Corp: Oracle Announces Oracle Solaris 11.1 at Oracle OpenWorld; Oracle Announces Oracle Exadata X3 Database In-Memory Machine; Oracle Enterprise Manager 12c introduces New Tools and Programs for Partners; Oracle Unveils First Industry-Specific Engineered System - the Oracle Networks Applications Platform; Oracle Unveils Expanded Oracle Cloud Offerings; Oracle Outlines Plans to Make the Future Java During JavaOne 2012 Strategy Keynote; Some interesting Java Facts and Figures; Oracle Announces MySQL 5.6 Release Candidate
    Technical Section: What's up with LDoms (4 tech articles); Oracle SPARC T4 Systems cut Complexity, Cost of Cryptographic Tasks; PeopleSoft Enterprise Financials 9.1; PeopleSoft HCM 9.1 combined online and batch benchmark; Product Update Bulletin Oracle Solaris Cluster Oct 2012; Sun ZFS Storage 7420; SPARC Product Line Update; SPARC M-series - New DAT 160 plus EOL of M3000 series; SPARC SuperCluster and SPARC T4 Servers Included in Enterprise Reference Architecture Sizing Tool; Oracle Magazine
    Learning & Events: Recently delivered Techcasts: An Update after Oracle OpenWorld; An Update on OVM Server for SPARC; Update to Oracle Database Appliance
    References: Bridgestone Aircraft Tire Reduces Required Disk Capacity by 50% with Virtualized Storage Solution; Fiat Group Automobiles Aligns Operational Decisions with Strategy by Using End-to-End Enterprise Performance Management System; Birkbeck, University of London Develops World-Class Computer Science Facilities While Reducing Costs with Ultrareliable and Scalable Data Infrastructure
    How to: Introducing Oracle System Assistant; How to Prepare a ZFS Storage Appliance to Serve as a Storage Device; Migrating Oracle Solaris 8 P2V with Oracle Database 10.2 and ASM; White paper on Best Practices for Building a Virtualized SPARC Computing Environment; How to extend the Oracle Solaris Studio IDE with NetBeans Plug-Ins; How I simplified Oracle Database 11g Installation on Oracle Linux 6
    You will find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below.
    URL: http://launch.oracle.com/
    PIN: eSTEP_2011
    Previously published Newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver.
    Thanks and best regards,
    Partner HW Enablement EMEA

    Read the article

  • What do you do when practical problems get in the way of practical goals?

    - by P.Brian.Mackey
    UPDATE: Source control is good to use. Sometimes, real-world issues make it impractical to use. For example: if the team is not used to using source control, training problems can arise; if a team member directly modifies code on the server, various issues can arise (merge problems, lack of history, etc.).
    Let's say there's a project that is way out of sync. The physical files on the server differ in unknown ways across ~100 files. Merging would take not only a great knowledge of the project, but is also well beyond what can be completed in the given time. Other projects are falling out of sync. Developers continue to distrust source control and therefore compound the issue by not using it.
    Developers argue that using source control is wasteful because merging is error-prone and difficult. This is a difficult point to argue, because when source control is being so badly misused and continually bypassed, it is indeed error-prone. Therefore, the evidence "speaks for itself" in their view.
    Developers argue that directly modifying code on the server, bypassing source control, saves time. This is also difficult to argue, because the merge required to synchronize the code to start with is time-consuming, across ~10 projects.
    Permanent files are often stored in the same directory as the web project, so publishing (a full publish) erases these files that are not in source control. This also drives distrust of source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points.
    So the culture perpetuates itself: bad practice begets more bad practice, and bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems. Servers and hard drive space are extremely difficult to come by, yet user expectations are rising. What can be done in this situation?

    Read the article

  • SOA, Cloud & Service Technology Symposium 2012 London

    - by JuergenKress
    Registration is now open, with special pricing for Oracle partners. For the exclusive Oracle discount, enter promo code DJMXZ370.
    OVERVIEW: The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops - all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF).
    KEYNOTES & SPEAKERS: More than 80 international subject matter experts will be speaking at the Symposium. Below are the confirmed keynotes and speakers so far; over 50% of the agenda has not yet been finalized, and many more speakers are to come. View the partial program calendars on the Conference Agenda page.
    Thomas Erl, Arcitura Education - "SOA, Cloud Computing & Semantic Web Technology: The Sequel - The Era of Intelligent Service Technology"
    Markus Zirn, Oracle - "Big Data with CEP and SOA"
    Clemens Utschig, Boehringer Ingelheim Pharma, and Manas Deb, Oracle - "The Successful Execution of the SOA and BPM Vision"
    Tim E. Hall, Oracle - "Community Management: The Next Wave of SOA Governance and API Management"
    SPONSORSHIP OPPORTUNITIES: The Symposium provides an excellent opportunity to promote your organization in the lead-up to the event, to delegates during the Symposium, and afterwards when the proceedings are made available on the Symposium web site. There are a limited number of premier sponsorship packages available, and a package can be tailored to your needs and budget. Download the Symposium Sponsorship Guide.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Technorati Tags: SOA Symposium,SOA Cloud Service Technology Symposium,Thomas Erl,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Why is my dual-boot Ubuntu partition showing up as a peripheral "root.disk"?

    - by Don
    I recently installed Ubuntu 12.04, which I had been booting from a USB key, as a dual-boot on my machine running Windows 7. From what I had read online while researching, I was prepared to have to shrink the Windows partition and all that. But I never had to - it really was just a few clicks here and there and it was installed. I'm still pretty confused about it, but whatever: it worked, the two peacefully coexist on my machine, and I have broken things to fix before I worry about fixing unbroken things. So yesterday I got it in my head to look at my partitions (I was considering making an all-new partition to install the Windows 8 Release Preview). What I saw confused me. Here's a screenshot of the disk utility. At this moment, there is nothing connected to my computer, and nothing in any of the optical drives/ports/card readers/etc. Can you help me figure out what's going on here? "Don's Machine" is, I believe, my Windows partition - that's the name I assigned my machine from Windows Explorer. PQSERVICE is, from what I can find online, also Windows, but having to do with backup. And SYSTEM REQUIRED, if I browse it in Ubuntu, is definitely something to do with booting, and I believe it is also Windows'. According to the sizes shown, those three together should use up my 500 GB HD. Then further down, as a "peripheral device", it lists that 31 GB disk. This is obviously my Ubuntu (Model: Linux Loop: root.disk), but why is it showing up as a peripheral? So, to sum up those questions and to add some more random ones I had:
    Why is Ubuntu showing up as a peripheral device?
    If the Windows sections take up all 500 GB, where does Ubuntu live?
    If I renamed the disk partitions, would my life become a nightmare (seriously - can I safely rename them)?
    Why didn't I have to resize the Windows partition in the first place?
    Would giving Ubuntu more space improve its performance (it hangs a lot)?
    Is it possible to have a partition for each OS (Windows 7 & 8, Ubuntu), a partition for files, and a separate partition for backups? Is this towards the good or bad idea end of the spectrum?
    @Elfy, would that explain why it keeps hanging? I guess I'll back up my files, rip it out, and reinstall it correctly later on today.

    Read the article

  • Convert VARCHAR() columns to NVARCHAR()

    - by ChrisD
    We recently underwent an upgrade that required us to change our database columns from VARCHAR to NVARCHAR to support Unicode characters. Digging through the internet, I found a base script which I modified to handle reserved-word table names and to maintain the NULL/NOT NULL constraint of the columns. I ran this script:
    use NWOperationalContent -- your catalog name here
    GO
    SELECT 'ALTER TABLE ' + isnull(schema_name(syo.id), 'dbo') + '.[' + syo.name + '] '
        + ' ALTER COLUMN [' + syc.name + '] NVARCHAR(' + case syc.length when -1 then 'MAX'
            ELSE convert(nvarchar(10), syc.length) end + ') ' +
            case syc.isnullable when 1 then ' NULL' ELSE ' NOT NULL' END + ';'
    FROM sysobjects syo
    JOIN syscolumns syc ON syc.id = syo.id
    JOIN systypes syt ON syt.xtype = syc.xtype
    WHERE syt.name = 'varchar'
      and syo.xtype = 'U'
    which produced a series of ALTER statements that I could then execute against the tables. In some cases I had to drop indexes, alter the tables, and re-create the indexes. There might have been a better way to do that, but manually dropping them got the job done.
    use NWMerchandisingContent
    GO
    ALTER TABLE Locale DROP CONSTRAINT PK_Locale
    ALTER TABLE Country DROP CONSTRAINT PK_Country
    GO
    ALTER TABLE dbo.[Campaign] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL;
    ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [UnitOfmeasure] NVARCHAR(200) NULL;
    ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL;
    ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Imperative] NVARCHAR(MAX) NULL;
    ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Instructions] NVARCHAR(MAX) NULL;
    ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[BundleComponent] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[Bundle] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[Banner] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[Video] ALTER COLUMN [Link] NVARCHAR(512) NOT NULL;
    ALTER TABLE dbo.[Video] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[ProductUsage] ALTER COLUMN [VideoLink] NVARCHAR(512) NOT NULL;
    ALTER TABLE dbo.[ProductUsage] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[Thumbnail] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL;
    ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [UnitOfMeasure] NVARCHAR(150) NOT NULL;
    ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [SwatchColor] NVARCHAR(50) NOT NULL;
    etc.
    GO
    ALTER TABLE Locale ADD CONSTRAINT PK_Locale PRIMARY KEY (LocaleId)
    ALTER TABLE Country ADD CONSTRAINT PK_Country PRIMARY KEY (CountryId)
    Note that this ALTER is non-destructive to the data. Hope this helps.

    Read the article

  • Proper Usage of Arrays and Functions [closed]

    - by Ssegawa Victor
    Can someone help me write a C program that solves the following problem?
    PROBLEM: Consider the faculty registrar who has to process results for 1st year, 1st semester students. Students offer five courses: CSC 1100, CSK 1101, CSC 1104, CSC 1105 and CSC 1106. The courses have credit units 4, 4, 4, 3 and 3 respectively. Lecturers provide coursework and exam marks. For each course, coursework constitutes 40% of the final mark while the exam constitutes 60% of the final mark. The role of the registrar is to:
    1. Compute the final mark for each student for each course. The final mark must be a whole number.
    2. Compute the grade and grade point of the students for each course they offered. According to senate regulations, grades and grade points are awarded to final marks according to the following criteria:
    Range     Grade  Grade Point
    90 - 100  A+     5.0
    80 - 89   A      5.0
    75 - 79   B+     4.5
    70 - 74   B      4.0
    65 - 69   C+     3.5
    60 - 64   C      3.0
    55 - 59   D+     2.5
    50 - 54   D      2.0
    45 - 49   E      1.5
    40 - 44   E-     1.0
    0 - 39    F      0.0
    3. Put a comment 'Retake' on a student for every course where the grade point is less than 2.0.
    4. Compute the cumulative grade point average (CGPA) for each student. The senate formula for CGPA is:
    CGPA = ( Σ_{i=1..N} CU_i × GP_i ) / ( Σ_{i=1..N} CU_i )
    5. Put a comment "Progress" on any student whose CGPA is greater than 2 and "Stay Put" on a student whose CGPA is less than 2.
    You are required to create a C program that considers a class of 25 students and:
    1. Initializes an array 'student' which stores student names.
    2. Initializes arrays for coursework and exam marks for each course. 'cw_csc_1100' and 'ex_csc_1100' store coursework and exam marks (respectively) for CSC 1100; the same approach is used for all other courses.
    3. Initializes the coursework and exam mark arrays with marks between 0 and 99.
    4. Writes appropriate functions that generate the final marks, grades, grade points, cumulative grade point averages, comments for students and comments for courses per student.
    5. Creates appropriate arrays for final marks and inserts the data there using the appropriate functions.
    6. Without creating any extra arrays, uses the functions created to generate a report per student that looks like the one below.
    Student Name: Ngubiri
    Course Unit  Final Mark  Grade  Grade Point  Course Comment
    CSC 1100     43          E-     1.0          Retake
    CSK 1101     50          D      2.0
    CSC 1104     59          D+     2.5
    CSC 1105     70          B      4.0
    CSC 1106     65          C+     3.5
    CGPA: 2.47   Overall Comment: Progress
    NB: It is advisable that the indices are used to identify the owners, e.g. if student[x] is John, then cw_csc_1100[x] should be a mark for John, since the index is the same.
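    A minimal sketch of the two core calculations (grade-point lookup and the CGPA weighted average), assuming the mark ranges and credit units given above; names are illustrative and this is not a full solution:
    #include <stdio.h>
    /* Illustrative only: map a final mark (0-100) to a grade point per the table above. */
    double grade_point(int mark)
    {
        if (mark >= 80) return 5.0;   /* A+ (90-100) and A (80-89) both carry 5.0 */
        if (mark >= 75) return 4.5;
        if (mark >= 70) return 4.0;
        if (mark >= 65) return 3.5;
        if (mark >= 60) return 3.0;
        if (mark >= 55) return 2.5;
        if (mark >= 50) return 2.0;
        if (mark >= 45) return 1.5;
        if (mark >= 40) return 1.0;
        return 0.0;
    }
    /* CGPA = sum(CU_i * GP_i) / sum(CU_i) over the N courses. */
    double cgpa(const int cu[], const double gp[], int n)
    {
        double num = 0.0, den = 0.0;
        for (int i = 0; i < n; i++) {
            num += cu[i] * gp[i];
            den += cu[i];
        }
        return den > 0.0 ? num / den : 0.0;
    }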

    Read the article

  • TechEd 2012: Windows 8 And Metro

    - by Tim Murphy
    Windows 8 is here (or at least very close) and that was the main feature of this morning's keynote. Antoine LeBlond started off by apologizing to the IT professionals since he planned on showing code. I'm not sure if IT Pros are that easily confused or why you would need such a disclaimer. Developers do real work, IT Pros just play with toys (just kidding).
    The highlights of the Windows 8 keynote for me started with some of the UI design elements that I had not seen when I was shown one of the Build tablets. Specifically, I liked the AppBar features that we have become used to with Windows Phone, and some of the gesture features. Even though they have been available on other platforms before, I think Microsoft really got them right.
    Two other great features of Windows 8 that they demonstrated were the Hyper-V capabilities and the ability to run Windows 8 anywhere from a USB key. My jaw dropped through the floor seeing a feature-rich OS boot off of a thumb drive. WOW! I also can't wait to get rid of dual booting just to run Hyper-V images when developing.
    The morning continued with a session on Metro XAML development with Tim Heuer. While it included a lot of great XAML Metro demos, I was pleasantly surprised by some of the things I found out about Visual Studio 2012. Finding out that Blend is now integrated with VS2012 was a nice addition after working with them as separate applications.
    Moving on to Metro, he introduced the nugget that WinRT is async everywhere. How deep this model goes will be an interesting thing to find out as I learn more about developing for the platform. Thankfully he followed that up with a couple of new keywords, await and async, that eliminate a lot of the plumbing that has been required in the past for asynchronous operations.
    Tim also related that since the Metro framework is relatively small, and most apps will use a significant amount of it, the entire surface is referenced by default. This is a contrast to adding namespaces and assemblies one after another as we normally do.
    This was such a power-packed session that I can't detail it all here, so here is the teaser list: new icons in VS2012 for extension methods; emulator/simulator testing features for gestures; portable class libraries; XAML no longer managed code - and so much more...
    del.icio.us Tags: Windows 8,Metro,Tim Heuer,XAML,Windows Phone,Hyper-V,Antoine LeBlond,TechEd,TechEd 2012,Visual Studio 2012,Visual Studio
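    For readers who have not used the two keywords mentioned above, here is a rough, self-contained C# sketch of the pattern (illustrative names only, not code from the session): awaiting a task lets the method yield to its caller instead of blocking or wiring up explicit callbacks.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;
    class AsyncAwaitSketch
    {
        // Hypothetical example: fetch a page and return its length without blocking the caller.
        static async Task<int> GetPageLengthAsync(string url)
        {
            using (var client = new HttpClient())
            {
                // 'await' suspends this method until the download completes,
                // replacing the Begin/End or callback plumbing older code needed.
                string body = await client.GetStringAsync(url);
                return body.Length;
            }
        }
        static void Main()
        {
            // Blocking on .Result here only to keep the demo synchronous.
            Console.WriteLine(GetPageLengthAsync("http://example.com").Result);
        }
    }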

    Read the article

  • Scenarios for Throwing Exceptions

    - by Joe Mayo
    I recently came across a situation where someone had an opinion that differed from mine on when an exception should be thrown. This particular case was an issue opened on LINQ to Twitter for an exception on EndSession. The premise of the issue was that the poster didn't feel an exception should be raised, regardless of authentication status. At first, this sounded like a valid point. However, I went back to review my code and decided not to make any changes. Here's my rationale:
    1. The exception doesn't occur if the user is authenticated when EndAccountSession is called.
    2. The exception does occur if the user is not authenticated when EndAccountSession is called.
    3. The exception represents the fact that EndAccountSession is not able to fulfill its intended purpose - to end the session. If a session never existed, then it would not be possible to perform the requested action. Therefore, an exception is appropriate.
    To help illustrate how to handle this situation, I've modified the following code in Program.cs in the LinqToTwitterDemo project:
    static void EndSession(ITwitterAuthorizer auth)
    {
        using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/"))
        {
            try
            {
                //Log
                twitterCtx.Log = Console.Out;
                var status = twitterCtx.EndAccountSession();
                Console.WriteLine("Request: {0}, Error: {1}", status.Request, status.Error);
            }
            catch (TwitterQueryException tqe)
            {
                var webEx = tqe.InnerException as WebException;
                if (webEx != null)
                {
                    var webResp = webEx.Response as HttpWebResponse;
                    if (webResp != null && webResp.StatusCode == HttpStatusCode.Unauthorized)
                        Console.WriteLine("Twitter didn't recognize you as having been logged in. Therefore, your request to end session is illogical.\n");
                }
                var status = tqe.Response;
                Console.WriteLine("Request: {0}, Error: {1}", status.Request, status.Error);
            }
        }
    }
    As expected, LINQ to Twitter wraps the original exception in a TwitterQueryException as the InnerException. The TwitterQueryException serves a very useful purpose through its Response property. Notice in the example above that the response has Request and Error properties. These properties correspond to the information that Twitter returns as part of its response payload. This is often useful while debugging, to help you understand why Twitter was unable to perform the requested action. Other times it's cryptic, but that's another story. At least you have some way of knowing in your code how to anticipate and handle these situations, along with having extra information to debug with.
    To sum things up, there are two points to make: when and why an exception should be raised, and when to wrap and re-throw an exception in a custom exception type. I felt it was necessary to allow the exception to be raised because the called method was unable to perform the task it was designed for. I also felt that it is inappropriate for a general library to do anything with exceptions, because that could potentially hide a problem from the caller. A related point is that it should be the exclusive decision of the application that uses the library what to do with an exception. Another aspect of this situation is that I wrapped the exception in a custom exception and re-threw. This is a tough call because I don't want to hide any stack trace information.
    However, the need to make the exception more meaningful by including vital information returned from Twitter swayed me toward designing an interface that was as helpful as possible to library consumers. As shown in the code above, you can dig into the exception and pull out a lot of good information, such as the fact that the underlying HTTP response was a 401 Unauthorized. In all, trade-offs are seldom perfect for all cases, but combining the facts that the method was unable to perform its intended function, that this is a library, and that the extra information can be more helpful, this seemed to be the better design. @JoeMayo

    Read the article

  • Wesquare (NL) helps major CG customer integrating Oracle Service Cloud (RightNow) with JDEdwards

    - by Richard Lefebvre
    When this well-known, Italy-based CG player decided that they needed a new CRM tool, Oracle partner WeSquare had a precise idea of what would be required, knowing that the customer was using JDEdwards as an ERP: they immediately thought of a solution that would help synchronize the customer's back-end system with the new CRM interface. The customer asked for presentations from three companies, including Oracle, and eventually selected Oracle Service Cloud (RightNow) with Alfa Sistemi (Oracle Platinum Partner) as a System Integrator, supported by WeSquare (Oracle Gold partner specialized in RightNow). Synchronizing an on-premises ERP with a new SaaS-based CRM platform could be seen as an uphill task, but WeSquare was determined, during the presales cycle, to prove that they had the skills and the attitude to make the difference. So they rolled up their sleeves and got to it: five days of relentless work, missed lunches, and hours of brainstorming showed their result in the form of a new interface that works fabulously well with the JDEdwards ERP back-end and was successfully pitched by Oracle to the end customer to win the deal! WeSquare took the occasion to learn that they can integrate Oracle Service Cloud (RightNow) with practically every other solution that a customer may run. As part of the project, WeSquare was also involved in developing different add-ons with the aim of enriching Oracle Service Cloud's functionality. WeSquare is based in The Netherlands with an in-shore practice supported by off-shore teams in India. WeSquare can integrate and synchronize any application with RightNow. For more information, visit www.wesquare.nl or contact Wiebe Blankenberg (Managing Director) at +31 (0) 6 3632 1104

    Read the article

  • Enabling Google Webmaster Tools With Your GWB Blog

    - by ToStringTheory
    I’ll be honest and save you some time, if you don’t have your own domain for your GWB blog, this won’t help, you may just want to move on…  I don’t want to waste your time……… Still here?  Good.  How great are Google’s website tools?  I don’t just mean Analytics which rocks, but also their Webmaster Tools (https://www.google.com/webmasters/tools/) which gives you a glimpse into the queries that provide you your website traffic, search engine behavior on your site, and important keywords, just to name a few.   Pictured Above: Cool statistics. Problem Thanks to svickn over at wtfnext.com (another GeeksWithBlogs blog), we already have the knowledge on how to setup Google Analytics (wtfnext.com - How to: Set up Google Analytics on your GeeksWithBlogs blog).  However, one of the questions raised in the post, and even semi-answered in the questions, was how to setup Google Webmaster Tools with your blog as well. At first glance, it seems like it can’t be done.  Google graciously gives you several different options on how to authorize that you own a site.  The authentication options are: 1. (Recommended) – Upload an HTML file to your server 2. Add a meta tag to your site’s home page 3. Use your Google Analytics account 4. Add a DNS record to your domain’s configuration Since you don’t have access to the base path, you can’t do #1.  Same goes for #2 since you can’t edit the master/index page.  As for #3, they REQUIRE the Analytics code to be in the <head> section of your page, so even though we can use the workaround of hosting it in the news section, it won’t allow it since it isn’t in the correct place. Solution Last I checked, I didn’t see the DNS record option for Webmaster Tools.  Maybe this was recently added, or maybe I don’t remember it since I was always able to use some other method to authorize it.  In this case though, this is the option that we need.  My registrar wasn’t in their list, but they provide detailed enough instructions for the ‘Other’ option: Simply create a TXT record with your domain hoster (mine is DynDns), fill in the tag information, and then click verify.  My entry was able to be resolved immediately, but since you are working with DNS, it may take longer.  If after 24 hours you still aren’t able to verify, you can use a site such as mxtoolbox.com, and in the searchbox type “txt: {domain-name-here}”, to see if your TXT record was entered successfully. It is pretty simple to setup the TXT entry in DynDns, but if you have questions/comments, feel free to post them. Conclusion With this simple workaround (not really a workaround, but feature since they offer it..), you are now able to see loads of information regarding your standings in the world of the Google Search Engine.  No critical issues?  Did I do something wrong?! As an aside, you can do the same thing with the Bing Webmaster Tools by adding a CNAME record to bing.verify.com…  Instructions can be found on the ‘Add Site’ popup when adding your site. If you don’t have your own domain, but continued, to read to this point – thank you!

    Read the article

  • Data Model Dissonance

    - by Tony Davis
    So often at the start of the development of database applications, there is a premature rush to the keyboard. Unless, before we get there, we’ve mapped out and agreed the three data models, the Conceptual, the Logical and the Physical, then the inevitable refactoring will dog development work. It pays to get the data models sorted out up-front, however ‘agile’ you profess to be. The hardest model to get right, the most misunderstood, and the one most neglected by the various modeling tools, is the conceptual data model, and yet it is critical to all that follows. The conceptual model distils what the business understands about itself, and the way it operates. It represents the business rules that govern the required data, its constraints and its properties. The conceptual model uses the terminology of the business and defines the most important entities and their inter-relationships. Don’t assume that the organization’s understanding of these business rules is consistent or accurate. Too often, one department has a subtly different understanding of what an entity means and what it stores, from another. If our conceptual data model fails to resolve such inconsistencies, it will reduce data quality. If we don’t collect and measure the raw data in a consistent way across the whole business, how can we hope to perform meaningful aggregation? The conceptual data model has more to do with business than technology, and as such, developers often regard it as a worthy but rather arcane ceremony like saluting the flag or only eating fish on Friday. However, the consequences of getting it wrong have a direct and painful impact on many aspects of the project. If you adopt a silo-based (a.k.a. Domain driven) approach to development), you are still likely to suffer by starting with an incomplete knowledge of the domain. Even when you have surmounted these problems so that the data entities accurately reflect the business domain that the application represents, there are likely to be dire consequences from abandoning the goal of a shared, enterprise-wide understanding of the business. In reading this, you may recall experiences of the consequence of getting the conceptual data model wrong. I believe that Phil Factor, for example, witnessed the abandonment of a multi-million dollar banking project due to an inadequate conceptual analysis of how the bank defined a ‘customer’. We’d love to hear of any examples you know of development projects poleaxed by errors in the conceptual data model. Cheers, Tony

    Read the article

  • Using HBase or Cassandra for a token server

    - by crippy
    I've been trying to figure out how to use HBase/Cassandra for a token system we're re-implementing. I can probably squeeze quite a lot more from MySQL, but it just seems we've come to clinging to the wrong tool for the task just because we know it well. Eventually we will hit a wall (like happened to us in other areas). Naturally I started looking into possible NoSQL solutions. The prominent ones (at least in terms of buzz) are HBase and Cassandra. The story is more or less like this:
    A user can send a gift to other users. Each gift has a list of recipients, or is public, in which case it is limited by number or expiration date.
    For each gift sent we generate some token that uniquely identifies that gift.
    For each gift we track the list of potential recipients and their current status relating to that gift (accepted, declined, etc.).
    A user can request to see all his currently pending gifts.
    A user can request a list of users he has sent a gift to today (used to limit the number of gifts sent).
    We require the ability to "dump" or "ignore" expired gifts (x-day-old gifts are considered expired).
    There are some other requirements, but I believe the above covers the essentials. How would I go about modeling that using HBase or Cassandra?
    Well, the wall was performance. A few tens of millions of records per day over 2 tables, kept for 2 weeks (wish I could have kept them for more, but there was no way). The response times kept getting slower and slower until eventually we had to start cutting down the number of days we kept data. Caching helps here, but it's not an ideal solution since a big part of the ops are updates. Also, as I hinted in my original post, we use MySQL extensively. We know exactly what it can and can't do, both in naive implementations, followed by native partitioning, and finally by horizontally sharding our dataset at the application level to reside on multiple DB nodes. It can be done, but that's not really what I'm trying to get from this. I asked a very specific question about designing a solution using a NoSQL store, since it's very hard to find examples of such designs out there.
    Brainlag, not trying to come off as rude. I actually appreciate it a lot that you are the only one who even bothered to respond. But I see it over and over again: people ask questions and others assume they have no idea what they're talking about and give an irrelevant answer. Ignore RDBMS please. The question is about NoSQL.

    Read the article

  • Oracle JDeveloper 11gR2 Cookbook book review

    - by Chris Muir
    I recently received a free copy of Oracle JDeveloper 11gR2 Cookbook, published by Packt Publishing, for review. Readers of technical cookbooks will know this genre of text includes problems that developers will hit and the prescribed solutions, in this case for Oracle's Application Development Framework (ADF). Books like this excel through thorough coverage, a logical progression of solutions throughout the book, and a readable narrative around the numerous steps and code. This book progresses well through ADF application assembly, ADF Business Components, the view layer, security, deployment and tuning. Each recipe had a clear introduction, and I especially enjoyed the "There's more" follow-up sections for some recipes, which lead the reader on to related ideas and issues the reader really needs to be aware of. Also worthy of comment, having worked with ADF for over 5 years, there certainly were recipes and solutions I hadn't encountered before; this book gets bonus points for that. As a reviewer, what negatives can I give this text? The book has cast its net too wide by trying to cover "everything from design and construction, to deployment, testing, debugging and optimization." ADF is such a large and sophisticated technology that this book, with 100 recipes, barely scrapes the surface. Don't expect all your ADF problems to be solved here. In turn, there is inconsistency in the level of problems and solutions. I felt at the beginning the book was pitching itself at advanced problems to solve (that's great for me), but then it introduces topics like building a static View Object or train. These topics in my opinion are fairly simple and are covered by the Oracle documentation just as well; they shouldn't have been included here. In conclusion, ADF beginners will find this book worthwhile as it will open your eyes to the wider problems and solutions required for ADF, and experts too, if only for the fact that they can point junior programmers at the book for certain problems and say "get on with it". Is there scope for more ADF tomes like this? Yes! I'd love to see a cookbook specializing in ADF Business Components (hint hint to budding authors).

    Read the article

  • What is a good strategy for binding view objects to model objects in C++?

    - by B.J.
    Imagine I have a rich data model that is represented by a hierarchy of objects. I also have a view hierarchy with views that can extract required data from model objects and display the data (and allow the user to manipulate the data). Actually, there could be multiple view hierarchies that can represent and manipulate the model (e.g. an overview-detail view and a direct manipulation view). My current approach for this is for the controller layer to store a reference to the underlying model object in the View object. The view object can then get the current data from the model for display, and can send the model object messages to update the data. View objects are effectively observers of the model objects and the model objects broadcast notifications when properties change. This approach allows all the views to update simultaneously when any view changes the model. Implemented carefully, this all works. However, it does require a lot of work to ensure that no view or model objects hold any stale references to model objects. The user can delete model objects or sub-hierarchies of the model at any time. Ensuring that all the view objects that hold references to the model objects that have been deleted is time-consuming and difficult. It feels like the approach I have been taking is not especially clean; while I don't want to have to have explicit code in the controller layer for mediating the communication between the views and the model, it seems like there must be a better (implicit) approach for establishing bindings between the view and the model and between related model objects. In particular, I am looking for an approach (in C++) that understands two key points: There is a many to one relationship between view and model objects If the underlying model object is destroyed, all the dependent view objects must be cleaned up so that no stale references exist While shared_ptr and weak_ptr can be used to manage the lifetimes of the underlying model objects and allows for weak references from the view to the model, they don't provide for notification of the destruction of the underlying object (they do in the sense that the use of a stale weak_ptr allows for notification), but I need an approach that notifies the dependent objects that their weak reference is going away. Can anyone suggest a good strategy to manage this?
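    One hedged sketch of the direction hinted at in the question - not a recommendation from the original post, just an illustration with invented names - is to pair shared_ptr/weak_ptr lifetime management with an explicit destruction signal on the model base class, so views learn immediately that their reference is going away rather than discovering a dead weak_ptr later:
    #include <functional>
    #include <memory>
    #include <vector>
    // Illustrative model base: notifies registered observers when it is destroyed.
    class ModelObject {
    public:
        using DeathCallback = std::function<void()>;
        void onDestroyed(DeathCallback cb) { deathCallbacks_.push_back(std::move(cb)); }
        virtual ~ModelObject() {
            for (auto& cb : deathCallbacks_) cb();   // tell every dependent view
        }
    private:
        std::vector<DeathCallback> deathCallbacks_;
    };
    // Illustrative view: holds a weak reference and also hears about destruction directly.
    class View {
    public:
        explicit View(const std::shared_ptr<ModelObject>& model) : model_(model) {
            // Caveat: real code must also unregister this callback if the View dies first.
            model->onDestroyed([this] { model_.reset(); /* drop the stale reference at once */ });
        }
        bool hasModel() const { return !model_.expired(); }
    private:
        std::weak_ptr<ModelObject> model_;
    };
    The many-to-one relationship falls out naturally, since any number of views can register against one model object; the part the sketch leaves out (view-side unregistration, or storing weak handles in the callback list) is exactly the bookkeeping the question is trying to minimize.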

    Read the article

  • Memory read/write access efficiency

    - by wolfPack88
    I've heard conflicting information from different sources, and I'm not really sure which one to believe. As such, I'll post what I understand and ask for corrections. Let's say I want to use a 2D matrix. There are three ways that I can do this (at least that I know of). 1: int i; char **matrix; matrix = malloc(50 * sizeof(char *)); for(i = 0; i < 50; i++) matrix[i] = malloc(50); 2: int i; int rowSize = 50; int pointerSize = 50 * sizeof(char *); int dataSize = 50 * 50; char **matrix; matrix = malloc(dataSize + pointerSize); char *pData = matrix + pointerSize - rowSize; for(i = 0; i < 50; i++) { pData += rowSize; matrix[i] = pData; } 3: //instead of accessing matrix[i][j] here, we would access matrix[i * 50 + j] char *matrix = malloc(50 * 50); In terms of memory usage, my understanding is that 3 is the most efficient, 2 is next, and 1 is least efficient, for the reasons below: 3: There is only one pointer and one allocation, and therefore, minimal overhead. 2: Once again, there is only one allocation, but there are now 51 pointers. This means there is 50 * sizeof(char *) more overhead. 1: There are 51 allocations and 51 pointers, causing the most overhead of all options. In terms of performance, once again my understanding is that 3 is the most efficient, 2 is next, and 1 is least efficient. Reasons being: 3: Only one memory access is needed. We will have to do a multiplication and an addition as opposed to two additions (as in the case of a pointer to a pointer), but memory access is slow enough that this doesn't matter. 2: We need two memory accesses; once to get a char *, and then to the appropriate char. Only two additions are performed here (once to get to the correct char * pointer from the original memory location, and once to get to the correct char variable from wherever the char * points to), so multiplication (which is slower than addition) is not required. However, on modern CPUs, multiplication is faster than memory access, so this point is moot. 1: Same issues as 2, but now the memory isn't contiguous. This causes cache misses and extra page table lookups, making it the least efficient of the lot. First and foremost: Is this correct? Second: Is there an option 4 that I am missing that would be even more efficient?

    Read the article

  • Are there any concerns with using a static read-only unit of work so that it behaves like a cache?

    - by Rowan Freeman
    Related question: How do I cache data that rarely changes? I'm making an ASP.NET MVC4 application. On every request the security details about the user will need to be checked with the area/controller/action that they are accessing to see if they are allowed to view it. The security information is stored in the database. For example: User Permission UserPermission Action ActionPermission A "Permission" is a token that is applied to an MVC action to indicate that the token is required in order to access the action. Once a user is given the permission (via the UserPermission table) then they have the token and can therefore access the action. I've been looking in to how to cache this data (since it rarely changes) so that I'm only querying in-memory data and not hitting a database (which is a considerable performance hit at the moment). I've tried storing things in lists, using a caching provider but I either run in to problems or performance doesn't improve. One problem that I constantly run in to is that I'm using lazy loading and dynamic proxies with EntityFramework. This means that even if I ToList() everything and store them somewhere static, the relationships are never populated. For example, User.Permissions is an ICollection but it's always null. I don't want to Include() everything because I'm trying to keep things simple and generic (and easy to modify). One thing I know is that an EntityFramework DbContext is a unit of work that acts with 1st-level caching. That is, for the duration of the unit of work, everything that is accessed is cached in memory. I want to create a read-only DbContext that will exist indefinitely and will only be used to read about permission data. Upon testing this it worked perfectly; my page load times went from 200ms+ to 20ms. I can easily force the data to refresh at certain intervals or simply leave it to refresh when the application pool is recycled. Basically it will behave like a cache. Note that the rest of the application will interact with other contexts that exist per request as normal. Is there any disadvantage to this approach? Could I be doing something different?
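    As a rough illustration of the idea described above - a long-lived, read-only context used purely as an in-memory permission cache and refreshed on an interval - the following sketch uses invented entity and context names (PermissionsContext, User, Permission) and is not code from the post:
    using System;
    using System.Data.Entity;
    using System.Linq;
    public static class PermissionCache
    {
        private static readonly object Sync = new object();
        private static PermissionsContext context;   // long-lived, read-only unit of work
        private static DateTime loadedAt;
        private static readonly TimeSpan MaxAge = TimeSpan.FromMinutes(30);
        public static bool UserHasPermission(int userId, string token)
        {
            lock (Sync)
            {
                if (context == null || DateTime.UtcNow - loadedAt > MaxAge)
                {
                    if (context != null) context.Dispose();
                    context = new PermissionsContext();
                    // Load users with their permissions once; afterwards reads come
                    // from the context's Local collection, i.e. from memory.
                    context.Users.Include(u => u.Permissions).Load();
                    loadedAt = DateTime.UtcNow;
                }
                var user = context.Users.Local.FirstOrDefault(u => u.Id == userId);
                return user != null && user.Permissions.Any(p => p.Token == token);
            }
        }
    }
    Whether a static context like this is safe depends on it never being used for writes and on tolerating slightly stale data, which is exactly the trade-off the post describes.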

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Creating and using Distribution Groups

    - by The Old Toxophilist
    By default, running your Exalogic in a virtualised configuration provides what is, to Cloud Users, a single large resource; they can just create vServers and not care about how they are laid down on the underlying infrastructure. All the Cloud Users will know is that they can create vServers. For example, if we have a Quarter Rack (8 nodes) and our Cloud User creates 8 vServers, those 8 vServers may run on 8 distinct nodes or may all run on the same node. Although in many cases we, as Cloud Users, may not be too worried about how the virtualisation algorithm decides where to place our vServers, there are cases where it is extremely important that vServers run on distinct physical compute nodes. For example, if we have a WebLogic cluster we will want the servers within the cluster to run on distinct physical nodes, to cover the situation where one physical node is lost. To achieve this, the Exalogic virtualised implementation provides Distribution Groups, which define an anti-affinity policy that the underlying virtualisation algorithm takes into account when placing vServers. It should be noted that Distribution Groups must be created before you create vServers, because a vServer can only be added to a Distribution Group at creation time.
    Creating a Distribution Group
    To create a Distribution Group we first need to select the Account in which we want the Distribution Group to be created. Once we have selected the account, the interface updates and account-specific actions are displayed within the Action panes. From the Action pane (or by right-clicking on the Account), select the "Create Distribution Group" action. This initiates the create wizard, as follows.
    Distribution Group Details: Within the first step of the wizard we can specify the name of the distribution group, which should be unique, and provide a detailed description of the group.
    Distribution Group Configuration: The second step of the wizard allows you to specify the number of elements that are required within this group, up to a maximum of the number of nodes within your Exalogic. At this point it is always better to specify a group with spare capacity, allowing for future expansion. As vServers are added to the group, the available slots decrease.
    Summary: Finally, the last step of the wizard displays a summary of the information entered.

    Read the article

  • Getting Started Plugging into the "Find in Projects" Dialog

    - by Geertjan
    In case you missed it amidst all the code in yesterday's blog entry, the "Find in Projects" dialog is now pluggable. I think that's really cool. The code yesterday gives you a complete example, but let's break it down a bit and deconstruct down to a very simple hello world scenario. We'll end up with as many extra tabs in the "Find in Projects" dialog as we need, for example, three in this case:  And clicking on any of those extra tabs will, in this simple example, simply show us this: Once we have that, we'll be able to continue adding small bits of code over the next few blog entries until we have something more useful. So, in this blog entry, you'll literally be able to display "Hello World" within a new tab in the "Find in Projects" dialog: import javax.swing.JComponent; import javax.swing.JLabel; import org.netbeans.spi.search.provider.SearchComposition; import org.netbeans.spi.search.provider.SearchProvider; import org.netbeans.spi.search.provider.SearchProvider.Presenter; import org.openide.NotificationLineSupport; import org.openide.util.lookup.ServiceProvider; @ServiceProvider(service = SearchProvider.class) public class ExampleSearchProvider1 extends SearchProvider { @Override public Presenter createPresenter(boolean replaceMode) { return new ExampleSearchPresenter(this); } @Override public boolean isReplaceSupported() { return false; } @Override public boolean isEnabled() { return true; } @Override public String getTitle() { return "Demo Extension 1"; } public class ExampleSearchPresenter extends SearchProvider.Presenter { private ExampleSearchPresenter(ExampleSearchProvider1 sp) { super(sp, true); } @Override public JComponent getForm() { return new JLabel("Hello World"); } @Override public SearchComposition composeSearch() { return null; } @Override public boolean isUsable(NotificationLineSupport nls) { return true; } } } That's it, not much code, works fine in NetBeans IDE 7.2 Beta, and is easier to digest than the big chunk from yesterday. If you make three classes like the above in a NetBeans module, and you install it, you'll have three new tabs in the "Find in Projects" dialog. The only required dependencies are Dialogs API, Lookup API, and Search in Projects API. Read the javadoc linked above and then in next blog entries we'll continue to build out something like the sample you saw in yesterday's blog entry.

    Read the article

  • Support ARMv7 instruction set in Windows Embedded Compact applications

    - by Valter Minute
    One of the most interesting new features of Windows Embedded Compact 7 is support for the ARMv5, ARMv6 and ARMv7 instruction sets instead of the "generic" ARMv4 support provided by the previous releases. This means that code built for Windows Embedded Compact 7 can leverage features (like the FPU unit on ARMv6 and v7) and instructions of recent ARM cores and improve performance. Those improvements are noticeable in graphics, floating-point calculation and data processing. The ARMv7 instruction set is supported by the latest Cortex-A8, A9 and A15 processor families. Those processors are currently used in tablets, smartphones and in-car navigation systems; they provide a great amount of processing power with low electric power consumption, making them very interesting for portable devices but also for any kind of device that requires a rich user interface, processing power and connectivity while keeping its power consumption low. The bad news is that the compiler provided with Visual Studio 2008 does not support ARMv7, building native applications using just the ARMv4 instruction set. Porting a Visual Studio "Smart Device" native C/C++ project to Platform Builder is not easy, and you'll lack many of the features that the VS2008 application development environment provides. You'll also need access to the BSP and OSDesign configuration for your device to be able to build and debug your application inside Platform Builder, and this may prevent independent software vendors from using the new compiler to improve their applications' performance. Adeneo Embedded now provides a whitepaper and a Visual Studio plug-in that allow usage of the new ARMv7-enabled compiler to build applications inside Visual Studio 2008. I worked on the whitepaper and the tools with the help of my colleagues, and now the results can be downloaded from Adeneo Embedded's website: http://www.adeneo-embedded.com/OS-Technologies/Windows-Embedded (click on the "WEC7 ARMv7 Whitepaper" tab to access the download links; free registration required). A very basic benchmark showed a very good performance improvement in integer and floating-point operations. Obviously your mileage may vary and we can't promise the same amount of improvement for every application, but with a small effort on your side (even smaller if you use the plug-in) you can try it on your own application. ARMv7 support is provided using Platform Builder's compiler, and the VS2008 application debugger is not able to debug ARMv7 code, so you may need to put in place some workarounds, like keeping ARMv4 code for debugging.

    Read the article

  • Reading a ZFS USB drive with Mac OS X Mountain Lion

    - by Karim Berrah
    The problem: I'm using a MacBook, mainly with Solaris 11, but sometimes with Mac OS X (Mountain Lion). The only missing piece is that Mac OS X can't read my external ZFS-based USB drive, where I store all my data. So, I decided to look for a solution.

    Possible solution: use VirtualBox with a Solaris 11 VM as a passthrough to my data. Here are the required steps.

    Installing a Solaris 11 VM
    - Install VirtualBox on your Mac OS X and add the extension pack (needed for USB).
    - Plug your ZFS-based USB drive into your Mac; ignore it when asked to initialize it.
    - Create a VM for Solaris (bridged network) and, before installing it, create a USB filter (in the settings of your VirtualBox VM, go to Ports, then USB, then add a new USB filter from the attached device: the grey USB-connector logo with the green plus sign).
    - Install the Solaris 11 VM, boot it, and install the Guest Additions.
    - Check the IP address of your Solaris VM with "ifconfig -a".

    Creating a path to your ZFS USB drive
    - In Mac OS X, use Disk Utility to unmount the USB-attached drive, then unplug the USB device.
    - Switch back to VirtualBox and select the window where your Solaris 11 VM is running.
    - Plug in your ZFS USB drive and select "Ignore" if Mac OS X invites you to initialize the disk.
    - In the VirtualBox VM menu, go to "Devices", then "USB Devices", and select your USB device from the drop-down menu.

    Connecting your Solaris VM to the USB drive
    - Inside Solaris, check that the device is accessible with the "format" CLI command. If not, repeat the previous steps.
    - With root privileges, force a "zpool import -f myusbdevicepoolname", because this pool was created on another system.
    - Check that you see your new pool with "zpool status".
    - Share your pool over NFS: "share -F nfs /myusbdevicepoolname".

    Accessing the USB ZFS drive from Mac OS X
    This is the easiest step: access an NFS share from Mac OS X.
    - Create a "ZFSdrive" folder on your Mac OS X desktop.
    - From a terminal under Mac OS X: mount -t nfs IPaddressOfMySolarisVM:/myusbdevicepoolname /Users/yourusername/Desktop/ZFSdrive

    Et voilà! You can now access your data, on a ZFS USB drive, directly from your Mountain Lion desktop. You can adjust the share rights to alter read/write permissions as needed, and you can enable compression or encryption inside the Solaris 11 VM.
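
    For reference, here is the same flow as a consolidated sketch of the commands involved; the pool name (mypool), the VM's IP address (192.168.56.101) and the Mac user name are illustrative placeholders, so substitute your own values:

    # Inside the Solaris 11 VM, as root: import the pool that was created
    # on another system, verify it, and share it over NFS.
    zpool import -f mypool
    zpool status mypool
    share -F nfs /mypool

    # On the Mac OS X side: create a mount point on the desktop and mount
    # the NFS share exported by the Solaris VM (sudo may be required).
    mkdir -p /Users/yourusername/Desktop/ZFSdrive
    mount -t nfs 192.168.56.101:/mypool /Users/yourusername/Desktop/ZFSdrive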

    Read the article

  • Drawing 2D Grid in 3D View - Need help with method

    - by Deukalion
    I'm trying to draw a simple 2D grid for an editor, to be able to navigate more clearly around the 3D space, but I can't render it. The Grid2D class creates a grid of a certain size at a location and should just draw lines:

    public class Grid2D : IShape
    {
        private VertexPositionColor[] _vertices;
        private Vector2 _size;
        private Vector3 _location;
        private int _faces;

        public Grid2D(Vector2 size, Vector3 location, Color color)
        {
            float x = 0, y = 0;
            if (size.X < 1f)
            {
                size.X = 1f;
            }
            if (size.Y < 1f)
            {
                size.Y = 1f;
            }
            _size = size;
            _location = location;
            List<VertexPositionColor> vertices = new List<VertexPositionColor>();
            _faces = 0;
            for (y = -size.Y; y <= size.Y; y++)
            {
                vertices.Add(new VertexPositionColor(location + new Vector3(-size.X, y, 0), color));
                vertices.Add(new VertexPositionColor(location + new Vector3(size.X, y, 0), color));
                _faces++;
            }
            for (x = -size.X; x <= size.X; x++)
            {
                vertices.Add(new VertexPositionColor(location + new Vector3(x, -size.Y, 0), color));
                vertices.Add(new VertexPositionColor(location + new Vector3(x, size.Y, 0), color));
                _faces++;
            }
            _vertices = vertices.ToArray();
        }

        public void Render(GraphicsDevice device)
        {
            device.DrawUserPrimitives<VertexPositionColor>(PrimitiveType.LineList, _vertices, 0, _faces);
        }
    }

    Like this:

    +----+----+----+----+
    |    |    |    |    |
    +----+----+----+----+
    |    |    |    |    |
    +----+----+----+----+
    |    |    |    |    |
    +----+----+----+----+
    |    |    |    |    |
    +----+----+----+----+

    Does anyone know what I'm doing wrong? If I add a Shape without a texture, it's set automatically to VertexColorEnabled and TextureEnabled = false. This is how I render it:

    foreach (RenderObject render in _renderObjects)
    {
        render.Effect.Projection = projection;
        render.Effect.View = view;
        render.Effect.World = world;
        foreach (EffectPass pass in render.Effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            try
            {
                // Could be a Grid2D
                render.Shape.Render(_device);
            }
            catch
            {
                throw;
            }
        }
    }

    This exception is thrown: "The current vertex shader declaration does not include all the elements required by the current Vertex Shader. Normal0 is missing." Simply put, I can't figure out how to draw a few lines. I want to draw them one at a time, and I guess that's the problem I haven't figured out; even when I tried rendering vertices[i], vertices[i+1] with primitiveCount = 1, vertices = 2, and so on, it didn't work either. Any suggestions?
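
    The exception suggests that the effect being applied expects a vertex normal (Normal0), which VertexPositionColor does not carry; an effect with lighting enabled, or one whose VertexColorEnabled flag was never set for this shape, would behave that way. Below is a minimal, hypothetical sketch of how such a grid is typically drawn with a BasicEffect configured for position-plus-color data only; the class and parameter names are illustrative and not taken from the original question:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    // Sketch: draw VertexPositionColor line lists with a BasicEffect whose
    // technique only expects Position0 and Color0, so no Normal0 is required.
    public static class GridRenderer
    {
        public static void Draw(GraphicsDevice device, VertexPositionColor[] vertices, int lineCount,
                                Matrix world, Matrix view, Matrix projection)
        {
            BasicEffect effect = new BasicEffect(device)
            {
                VertexColorEnabled = true,   // use the per-vertex color technique
                TextureEnabled = false,
                LightingEnabled = false      // lighting would require normals
            };
            effect.World = world;
            effect.View = view;
            effect.Projection = projection;

            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                // lineCount is the number of line segments (the _faces counter above),
                // which matches the primitiveCount expected for PrimitiveType.LineList.
                device.DrawUserPrimitives<VertexPositionColor>(
                    PrimitiveType.LineList, vertices, 0, lineCount);
            }
        }
    }

    In practice the BasicEffect would be created once and cached rather than rebuilt on every draw call; the point of the sketch is only that the effect's technique must match the vertex declaration being drawn.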

    Read the article

  • Resources to Help You Getting Up To Speed on ADF Mobile

    - by Joe Huang
    Hi, everyone: By now, I hope you have had a chance to review the sample applications and try to deploy them. This is a great way to get started on learning ADF Mobile. To help you get started, here is a central list of steps to get up to speed on ADF Mobile, together with related resources that can help you develop your mobile application.

    - Check out the ADF Mobile Landing Page on the Oracle Technology Network.
    - View this introductory video.
    - Read this Data Sheet and FAQ on ADF Mobile.
    - Download JDeveloper 11.1.2.3. Download the generic version of JDeveloper for installation on Mac; note that there are workarounds required to install JDeveloper on a Mac.
    - Download the ADF Mobile Extension from the JDeveloper Update Center or here. Please note you will need to configure JDeveloper for Internet access (in HTTP Proxy preferences) in order to install the extension, as the installation process will prompt you for a license that's linked off Oracle's web site.
    - View this end-to-end application creation video.
    - View this end-to-end iOS deployment video if you are developing for iOS devices.
    - Configure your development environment, including the location of the SDK, in the JDeveloper Tools > Preferences > ADF Mobile dialog box. The two videos above should cover some of these configuration steps.
    - Check out the sample applications shipped with JDeveloper, and then deploy them to simulators/devices using the steps outlined in the videos above. This blog entry outlines all sample applications shipped with JDeveloper.
    - Develop a simple mobile application by following this tutorial.
    - Try out the Oracle OpenWorld 2012 Hands-on Lab to get a sense of how to programmatically access server data. You will need these source files.
    - Ask questions in the ADF/JDeveloper Forum.
    - Search the ADF Mobile Preview Forum for entries from ADF Mobile beta-testing participants.
    - For all other questions, check out this exhaustive and detailed ADF Mobile Developer Guide.
    - If something does not seem right, check out the ADF Mobile Release Notes.

    Thanks, Oracle ADF Mobile Product Management Team

    Read the article
