Search Results

Search found 5300 results on 212 pages for 'my handy references'.


  • Northwind now available on SQL Azure

    - by jamiet
    Two weeks ago I made available a copy of [AdventureWorks2012] on SQL Azure and published credentials so that anyone from the SQL community could connect up and experience SQL Azure, probably for the first time. One of the (somewhat) popular requests thereafter was to make the venerable Northwind database available too, so I am pleased to say that as of right now, Northwind is up there too.

    You will notice immediately that all of the Northwind tables (and the stored procedures and views too) have been moved into a schema called [Northwind] – this was so that they could be easily differentiated from the existing [AdventureWorks2012] objects. I used a SQL Server Data Tools (SSDT) project to publish the schema and data up to this SQL Azure database; if you are at all interested in poking around that SSDT project then I have made it available on Codeplex for your convenience under the MS-PL license – go and get it from https://northwindssdt.codeplex.com/.

    Using SSDT proved particularly useful as it alerted me to some aspects of Northwind that were not compatible with SQL Azure, namely that five of the tables did not have clustered indexes. The beauty of using SSDT is that I am alerted to these issues before I even attempt a connection to SQL Azure. Pretty cool, no? Fixing this situation was of course very easy: I simply changed the following primary keys from being nonclustered to clustered:

        [PK_Region]
        [PK_CustomerDemographics]
        [PK_EmployeeTerritories]
        [PK_Territories]
        [PK_CustomerCustomerDemo]

    If you want to connect up then here are the credentials that you will need:

        Server:   mhknbn2kdz.database.windows.net
        Database: AdventureWorks2012
        User:     sqlfamily
        Password: sqlf@m1ly

    You will need SQL Server Management Studio (SSMS) 2008 R2 installed in order to connect, or alternatively simply use this handy website: https://mhknbn2kdz.database.windows.net, which provides a web interface to a SQL Azure server.

    Do remember that hosting this database is not free, so if you find that you are making use of it please help to keep it available by visiting PayPal and donating any amount at all to [email protected]. To make this easy you can simply hit this link and the details will be completed for you – all you have to do is log in and hit the “Send” button. If you are already a PayPal member then it should take you all of about 20 seconds!

    I hope this is useful to some of you folks out there. Don’t forget that we also have more data up there than in the conventional [AdventureWorks2012]; read more at Big AdventureWorks2012.

    @Jamiet

    AdventureWorks on Azure - Provided by the SQL Server community, for the SQL Server community!
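    For anyone who would rather script a quick smoke test than open SSMS, here is a minimal ADO.NET sketch in C# using the published credentials. The [Northwind].[Customers] query is just an illustrative choice (Customers is a standard Northwind table, moved into the [Northwind] schema as described above), and of course the hosted database's availability may change over time:

        using System;
        using System.Data.SqlClient;

        class NorthwindSmokeTest
        {
            static void Main()
            {
                // Credentials exactly as published in the post.
                var builder = new SqlConnectionStringBuilder
                {
                    DataSource = "mhknbn2kdz.database.windows.net",
                    InitialCatalog = "AdventureWorks2012",
                    UserID = "sqlfamily",
                    Password = "sqlf@m1ly",
                    Encrypt = true // SQL Azure requires encrypted connections
                };

                using (var conn = new SqlConnection(builder.ConnectionString))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand(
                        "SELECT TOP 5 CompanyName FROM [Northwind].[Customers]", conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine(reader.GetString(0));
                    }
                }
            }
        }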

    Read the article

  • Using multiple diagrams per model in Entity Framework 5.0

    - by nikolaosk
    I have downloaded .Net Framework 4.5 and Visual Studio 2012 since it was released to MSDN subscribers on the 15th of August. For people that do not know about that yet, please have a look at Jason Zander's excellent blog post. Since then I have been investigating the many new features that have been introduced in this release. In this post I will be looking into the new multiple-diagrams support in the Entity Framework designer.

    In order to follow along with this post you must have Visual Studio 2012 and .Net Framework 4.5 installed on your machine. Download and install VS 2012 using this link. My machine runs on Windows 8 and Visual Studio 2012 works just fine. I have also installed on my machine SQL Server 2012 Developer edition, and I have downloaded and installed the AdventureWorksLT2012 database. You can download this database from the codeplex website.

    Before I start showcasing the demo I want to say that I strongly believe that Entity Framework is maturing really fast and now, at version 5.0, can be used as your data access layer in all your .Net projects. I have posted extensively about Entity Framework in my blog. Please find all the EF related posts here.

    In this demo I will show you how to split an entity model into multiple diagrams using the new enhanced EF designer. We will not build an application in this demo. Sometimes our model can become too large to edit or view. In earlier versions we could only have one diagram per EDMX file. In EF 5.0 we can split the model into more diagrams.

    1) Launch VS 2012. The Express edition will work fine.
    2) Create a new project. From the available templates choose a Web Forms application.
    3) Add a new item to your project, an ADO.Net Entity Data Model. I have named it AdventureWorksLT.edmx. Then we will create the model from the database and click Next. Create a new connection by specifying the SQL Server instance and the database name and click OK. Then click Next in the wizard. In the next screen of the wizard select all the tables from the database and hit Finish.
    4) It will take a while for our .edmx diagram to be created. When I select an entity (e.g. Customer) from my diagram and right-click on it, a new option appears, "Move to new Diagram". Make sure you have the Model Browser window open. Have a look at the picture below.
    5) When we do that, a new diagram is created and our entity is moved there. Have a look at the picture below.
    6) We can also right-click and include the related entities. Have a look at the picture below.
    7) When we do that, the related entities are copied to the new diagram. Have a look at the picture below.
    8) Now we can cut (CTRL+X) the entities from Diagram2 and paste them back into Diagram1.
    9) Finally, another great enhancement of the EF 5.0 designer is that you can change the colors of the various entities that make up the model. Select the entities you want to change color, then in the Properties window choose the color of your choice. Have a look at the picture below.

    To recap, we have demonstrated how to split your entity model into multiple diagrams, which comes in handy in EF models that have a large number of entities in them. Hope it helps!!!!
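    Once the wizard has generated the model, querying it is plain LINQ to Entities. A minimal sketch follows; note that the context name "AdventureWorksLT2012Entities" and the "Customers" entity set are assumptions based on what the wizard typically generates for this database, so substitute whatever names your EDMX produced:

        using System;
        using System.Linq;

        class ModelDemo
        {
            static void Main()
            {
                // Context name is whatever you chose in the EDMX wizard (assumed here).
                using (var context = new AdventureWorksLT2012Entities())
                {
                    var customers = context.Customers
                                           .Where(c => c.LastName.StartsWith("A"))
                                           .Take(5);
                    foreach (var c in customers)
                        Console.WriteLine("{0} {1}", c.FirstName, c.LastName);
                }
            }
        }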

    Read the article

  • JavaOne: Parleys.com, Spring Vs. Java EE and HTML5 tooling

    - by delabassee
    Parleys.com, a 2012 Duke's Choice Award winner, is an e-learning platform that hosts content from different sources (conferences, JUGs meetings, etc.). There is a lot of technical content available for online and offline consumption, including many sessions on Java EE. Parleys has just released, for free, all the Devoxx 2011 sessions (video and slides sync'ed!). From a technical point of view, Parleys.com is interesting as they have switched from Spring to Java EE 6 to avoid being locked into a proprietary framework. During the GlassFish Community BoF, Stephan Janssen (Parleys.com and Devoxx founder) also presented how GlassFish is used to support 2000 concurrent Parleys users over a cluster of 2 GlassFish instances.

    Talking about Java EE and/or Spring, Harshad Oak has posted an update on the 'Spring Vs. Java EE' panel discussion that took place on Tuesday. As Arun said, standards such as Java EE do not necessarily restrain innovation: "JBoss Forge & Arquillian from RedHat are great examples of innovation in the JavaEE community. Standardization is important but innovation does continue even within that framework."

    Simplicity and productivity, along with HTML5, are the driving themes of Java EE 7. In terms of simplicity and productivity, the developer experience can also be improved by the tooling. Every NetBeans release comes with a large set of improvements, and the just-released NetBeans 7.3 beta is no exception. The goal of NB 7.3's Project Easel is to improve HTML5 development, something that will be handy for Java EE 7 developers. Project Easel can, for example, communicate directly with Chrome's WebKit engine; this feature was shown during Sunday's Technical Keynote at the end of the Java EE section. In this beta release, Chrome and the embedded JavaFX browser are the only supported browsers, but the NetBeans team plan to add support, over time, for other WebKit-based browsers.

    NetBeans 7.3 beta
    NetBeans 7.3 screencasts

    Today (i.e. Wednesday 3rd) is also the final exhibition day, so make sure to visit the Java EE and the GlassFish pods on the Java DEMOgrounds (Hilton Grand Ballroom, 9:30 am - 5:00 pm). Finally, here are some Java EE and GlassFish related activities worth attending today if you are at JavaOne:

    Wednesday October 3rd
    8:30-9:30am   – What's New in Servlet 3.1: An Overview – Parc 55 Mission
    8:30-9:30am   – Bean Validation 1.1: What's New Under the Hood – Parc 55 Cyril Magnin II/III
    10:00-11:00am – JSR 353: Java API for JSON Processing – Parc 55 Mission
    10:00-12:00pm – Tutorial: Integrating Your Service into the GlassFish PaaS Platform – Parc 55 Devisidero
    11:30-12:30pm – What's New in JSF: A Complete Tour of JSF 2.2 – Parc 55 Cyril Magnin I
    11:30-12:30pm – Best of Both Worlds: Java Persistence with NoSQL and SQL – Parc 55 Mission
    1:00-2:00pm   – Sharding Middleware to Achieve Elasticity and High Availability in the Cloud – Parc 55 Market Street
    1:00-2:00pm   – Pimp My RESTful Java Applications – Parc 55 Cyril Magnin I
    3:00-4:00pm   – Migrating Spring to Java EE – Parc 55 Cyril Magnin II/III
    4:30-5:30pm   – JavaEE.Next(): Java EE 7, 8, and Beyond – Parc 55 Cyril Magnin II/III
    4:30-5:30pm   – HTML5 WebSocket and Java – Parc 55 Cyril Magnin I
    4:30-5:30pm   – Easy Middleware for Your Embedded Device – Nikko Ballroom II/III

    Read the article

  • Ubuntu 12.04.3 Graphics Issues: Broken Pipes, Reinstalled Xorg and Bumblebee

    - by user190488
    It seems I have a problem, and am only making it worse by following what I find online. I have a new Asus N550JV-D71 (not sure about the part after the dash, though I definitely know it includes 71). I decided to downgrade Windows 8 to 7, then dual boot Ubuntu 12.04 with it (there were issues with Windows 8, and I had a Windows 7 disk handy). It did work and, after installing Bumblebee in tty (because it wouldn't boot when it was first installed), it worked marvelously for a little less than a week.

    However, I restarted it last night and got the Could not write bytes: Broken pipes error. (I see it's a very common error, but I've looked at the majority of the suggested Similar Questions already.) I followed what I could find online, followed those instructions (making sure to not install any sort of graphics drivers other than what Bumblebee provides), and it just seems to go further and further downhill. I'm afraid I didn't write down the exact steps to get to this point (it was late by the time I gave up the night before), but it involved reinstalling lightdm, xorg (and xserver?), and Bumblebee. I then changed the Bumblebee.conf file so that Device=nvidia.

    I'm pretty new to Linux in general (I've used it since 10.04, but I hadn't had issues up until this computer, so it let me stay a newbie), so I'm not exactly sure what log files to look at to find the errors to look up. However, I did look at lshw and noticed that displays was marked as unassigned. Also, if I try to start lightdm using the command line, it always stops at Stopping Mount network filesystems. I should note that there isn't an xorg.conf file, and no .Xauthority.

    I would really, really prefer not to reinstall 12.04 if possible. I managed to get grub to display only a short time ago, and I can't boot to the dvd drive unless I go into the BIOS settings and manually change the boot order (that was an issue from the beginning, before the Ubuntu install), and getting into those settings often means rebooting several times due to the fact that the window to get to it is extremely small. I have most of what I need backed up, however, in case it does get to that point. If I really have to, I can just use the latest Ubuntu version instead of the LTS, but the reason I chose 12.04 in the first place is because I need something stable-ish, and Windows isn't suitable to what I need to do.

    I should note that the reason I restarted last night in the first place was that it wasn't charging the battery, and the wifi kept on going out.

    Hardware: Nvidia GeForce GT 750M, Intel HD Graphics 4600

    Read the article

  • 10 Tips for Partners - Oracle PartnerNetwork Exchange Planning

    - by Get_Specialized!
    With the Oracle PartnerNetwork Exchange @ Oracle OpenWorld 2012 just around the corner, here are 10 tips to aid partners in planning and preparation.

    1. Before you arrive, select the Oracle PartnerNetwork Exchange sessions you and your team will attend.
    2. Sign up for the Test Fest and the exams you and your team can take while attending.
    3. Review the Subject Area Focus On documents to help you zero in on the Oracle OpenWorld sessions to attend.
    4. Use the handy floor plans to get familiar with what is where in the exposition hall this year before you arrive.
    5. Sunday just after lunch at 1pm, attend the PartnerNetwork Exchange keynote, Moscone North, Hall D, followed by the session track kickoffs.
    6. Sunday night, 7:30pm – 10:30pm, check out the OPN AfterDark Reception where you can meet and network with contacts from around the world.
    7. On Sunday and Monday, be sure to check in with the Social Media Rally Coordinator for maximum social media expertise and exposure.
    8. On Monday through Wednesday, meet with Oracle Partner representatives at the Oracle PartnerNetwork Lounge, Moscone South, Exhibit Hall, Room 100.
    9. Take and share your PartnerNetwork pictures during the week with OPN on Instagram.
    10. Be prepared to share with roving OPN team member reporters how you are leveraging your OPN Specializations to provide innovative solutions and services for the Cloud. You never know – it could aid in getting you exposure as a possible speaker for next year's event.

    Read the article

  • Some More New ADF Features in JDeveloper 11.1.2

    - by Steven Davelaar
    The official list of new features in JDeveloper 11.1.2 is documented here. While playing with JDeveloper 11.1.2 and scanning the web user interface developer's guide for 11.1.2, I noticed some additional new features in ADF Faces, small but they might come in handy:

    - You can use the af:formatString and af:formatNamed constructs in EL expressions to use substitution variables. For example: <af:outputText value="#{af:formatString('The current user is: {0}',someBean.currentUser)}"/> See section 3.5.2 in the web user interface guide for more info.

    - A new ADF Faces Client Behavior tag: af:checkUncommittedDataBehavior. See section 20.3 in the web user interface guide for more info. For this tag to work, you also need to set the uncommittedDataWarning property on the af:document tag, and this property has quite some issues as you can read here. I did a quick test: the alert is shown for a button that is on the same page; however, if you have a menu in a shell page with dynamic regions, then clicking on another menu item does not raise the alert if you have pending changes in the currently displayed region. For now, the JHeadstart implementation of pending changes still seems the best choice (will blog about that soon).

    - New properties on the af:document tag: smallIconSource creates a so-called favicon that is displayed in front of the URL in the browser address bar, and the largeIconSource property specifies the icon used by a mobile device when bookmarking the page to the home page. See section 9.2.5 in the web user interface guide for more info. Also notice the failedConnectionText property, which I didn't know about but which was already available in JDeveloper 11.1.1.4.

    - The af:showDetail tag has a new property handleDisclosure which you can set to client for faster rendering.

    - In JDeveloper 11.1.1.x, an expression like #{bindings.JobId.inputValue} would return the internal list index number when JobId was a list binding. To get the actual JobId attribute value, you needed to use #{bindings.JobId.attributeValue}. In JDeveloper 11.1.2 this is no longer needed; the #{bindings.JobId.inputValue} expression will return the attribute value corresponding to the selected index in the choice list.

    Did you discover other "hidden" new features? Please add them as a comment to this blog post so everybody can benefit.

    Read the article

  • Another Custom Property Locator: a Library of Books

    - by Cindy McMullen
    Introduction

    The previous post gave an introduction to custom property locators and showed how to create one using JDeveloper. This post continues on the custom locator theme, with a slightly more complex locator: a library of books. It demonstrates using the DAO pattern to delegate data access from the Locator, which is likely how many actual backing stores will integrate with the Locator. You can imagine, rather than a library of books, the data store might be a user database of sorts. The same sort of pattern would apply.

    This post uses the BookLocator example originally shown in the WebCenter documentation, but has:

    - updated source code to reflect the final Property APIs
    - the steps for generating the namespace and property definition files via JDeveloper
    - detailed usage of the PropertyService APIs

    Getting Started

    If you're new to JDeveloper, you might want to check out this tutorial. There is also the "Jump-Start to using Personalization" blog post that you might find useful. Otherwise, if you're already familiar with both, you can skip those tutorials and jump right in to using JDeveloper.

    Download the BookLocator.zip file (which has been updated from the original post) and unzip it to a new directory. Start JDeveloper, navigate to the BookLocator.jws file, and open it. It should look something like this:

    The Properties Namespace file contains the property definitions and property set definitions you define. It is explained in more detail in the Namespace documentation. Although this example doesn't show it, the property set definitions have the ability to reference multiple locators per property. This can be done by right-clicking on the 'Locator Info' box. Configure the contents of the Locator Map by editing locators and mapping them to available property names in the property set definition.

    Compiling, deploying, and running your locator

    The rest of the steps in this tutorial basically follow those in the previous blog on custom locators, and won't be repeated here. A scenario to invoke your locator is included with the sample app: see the BookProperties.scenarios_diagram above.

    Summary

    This post demonstrates a simple library of books accessed by the BookPropertyLocator via the DAO layer. This is a useful pattern for more realistic property retrievals, such as a backing user store. It also points out the possibility of retrieving properties from multiple locators, which would be quite handy to retrieve user attributes from multiple sources.

    Read the article

  • Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5

    - by Shub Lahiri, A-Team
    Background

    The SOA Suite for Healthcare Integration pack comes with a pre-installed simulator that can be used as an external endpoint to generate inbound and outbound HL7 traffic on specified MLLP ports. This is a command-line utility that can be very handy when trying to build a complete end-to-end demo within a standalone, closed environment. The ant-based utility accepts the name of a configuration file as the command-line input argument. The format of this configuration file has changed between PS4 and PS5. In PS4, the configuration file was XML based and in PS5, it is name-value property based. The rest of this note highlights these differences and provides samples that can be used to run the first scenario from the product samples set.

    PS4 - Configuration File

    The sample configuration file for PS4 is shown below. The configuration file contains information about the following items:

    - Directory for incoming and outgoing files for the host running SOA Suite Healthcare
    - Polling interval for the directory
    - External endpoint logical names
    - External endpoint server host name and ports
    - Message throughput to be simulated for generating outbound messages
    - Documents to be handled by different endpoints

    A copy of this file can be downloaded from here.

    PS5 - Configuration File

    The corresponding sample configuration file for PS5 is shown below. The configuration file contains similar information about the sample scenario but is not in XML format. It has name-value pairs specified in the form of a properties file. This sample file can be downloaded from here.

    Simulator Configuration

    Before running the simulator, the environment has to be set by defining the proper ANT_HOME and JAVA_HOME. The following extract is taken from a working sample shell script to set the environment. Also, as a part of setting the environment, template jndi.properties and logging.properties can be generated by using the following ant command:

        ant -f ant-b2bsimulator-util.xml b2bsimulator-prop

    Sample jndi.properties and logging.properties are shown below and can be modified, as needed. The jndi.properties file contains information about connectivity to the local Weblogic Managed Server instance and the logging.properties file controls the amount of logging that can be generated from the running simulator process.

    Simulator Usage - Start and Stop

    The command syntax to launch the simulator via ant is the same in PS4 and PS5. Only the appropriate configuration file has to be supplied as the command-line argument, for example:

        ant -f ant-b2bsimulator-util.xml b2bsimulatorstart -Dargs="simulator1.hl7-config.xml"

    This will start the simulator and will keep running to provide an active external endpoint for the SOA Healthcare Integration engine. To stop the simulator, a similar ant command can be used, for example:

        ant -f ant-b2bsimulator-util.xml b2bsimulatorstop

    Read the article

  • Oops, I left my kernel zone configuration behind!

    - by mgerdts
    Most people use boot environments to move in one direction. A system starts with an initial installation and from time to time new boot environments are created - typically as a result of pkg update - and then the new BE is booted. This post is of little interest to those people as no hackery is needed. This post is about some mild hackery.

    During development, I commonly test different scenarios across multiple boot environments. Many times, those tests aren't related to the act of configuring or installing zones, so it's kinda handy to avoid the effort involved in zone configuration and installation. A somewhat common order of operations is like the following:

        # beadm create -e golden -a test1
        # reboot

    Once the system is running in the test1 BE, I install a kernel zone.

        # zonecfg -z a178 create -t SYSsolaris-kz
        # zoneadm -z a178 install

    Time passes, and I do all kinds of stuff to the test1 boot environment and want to test other scenarios in a clean boot environment. So then I create a new one from my golden BE and reboot into it.

        # beadm create -e golden -a test2
        # reboot

    Since the test2 BE was created from the golden BE, it doesn't have the configuration for the kernel zone that I configured and installed. Getting that zone over to the test2 BE is pretty easy. My test1 BE is really known as s11fixes-2.

        root@vzl-212:~# beadm mount s11fixes-2 /mnt
        root@vzl-212:~# zonecfg -R /mnt -z a178 export | zonecfg -z a178 -f -
        root@vzl-212:~# beadm unmount s11fixes-2
        root@vzl-212:~# zoneadm -z a178 attach
        root@vzl-212:~# zoneadm -z a178 boot

    On the face of it, it would seem as though it would have been easier to just use zonecfg -z a178 create -t SYSsolaris-kz within the test2 BE to get the new configuration over. That would almost work, but it would have left behind the encryption key required for access to host data and any suspend image. See solaris-kz(5) for more info on host data. I very commonly have more complex configurations that contain many storage URIs and non-default resource controls. Retyping them would be rather tedious.

    Read the article

  • Quickly Copy Movie Files to Individually Named Folders

    - by DigitalGeekery
    Some HTPC media manager applications require movie files to be stored in separate folders to properly store information such as cover art images and other metadata. Here we look at copying movie files to individual folders. If you already have a large movie collection stored in a single folder, we'll show you how to quickly move those files into their own individually named folders.

    File2Folder

    File2folder is a handy portable app that automatically creates and moves movie files into a folder of the same filename. There is no installation needed. Simply download and run the .exe file (link below).

    Enter the current movie directory, or browse for the folder. File2folder now supports both local and network shares. When you are ready to create the folders and move the files, click Move! You'll see the move progress displayed in the window. When the process is finished, you'll have all your movie files in individual folders.

    Change your mind? Just click the Undo! button and the move and folder creation process will be undone.

    If you would like to have the folder monitored for new files, click the Start button. File2folder will process any new files it discovers every 180 seconds. To turn it off, click Stop.

    This simple little program is a huge timesaver for those looking to organize movie collections for their HTPC. We should also note that this will work with any files, not just videos.

    Download file2folder

    Read the article

  • My Doors - Why Standards Matter to Business

    - by [email protected]
    By Brian Dayton on April 8, 2010 9:27 PM

    "Standards save money." "Standards accelerate projects." "Standards make better solutions." What do these statements mean to you? You buy technology solutions like Oracle Applications but you're a business person - trying to close the quarter, get performance reviews processed, negotiate a new sourcing contract, etc. When "standards" come up in presentations and discussions do you:

    - Nod your head politely
    - Tune out and check your smart phone
    - Turn to your IT counterpart and say "Bob's all over this standards thing, right Bob?"

    Here's why standards matter. My wife wants new external doors downstairs, ones that would get more light into the rooms. Am I OK with that? "Uhh, sure...it's a little dark in the kitchen."

    - 24 hours ago - wife calls to tell me that she's going to the hardware store and may look at doors
    - 20 hours ago - wife pulls into driveway, informs me that two doors are in the back of her station wagon, ready for me to carry
    - 19 hours ago - I re-discovered the fact that it's not fun to carry a solid wood door by myself
    - 5 hours ago - Local handyman, who was at our house anyway, tells me that the doors we bought will likely cost 2-3x the material cost in installation time and labor...the doors are standard but our doorways aren't

    We could have done more research. I could be more handy. Sure. But the fact is, my 1951 house wasn't built with me in mind. They built what worked and called it a day.

    The same holds true with a lot of business applications. They were designed and architected for one-time use with one use-case in mind. Today's business climate is different. If you're going to use your processes and technology to differentiate your business you should have at least a working knowledge of:

    - How standards can benefit your business
    - Your IT organization's philosophy around standards
    - Your vendor's track-record around standards...and watch for those who pay lip-service to standards but don't follow through

    The rallying cry in most IT organizations today is "learn more about the business, drop the acronyms." I'm not advocating that you go out and learn how to code in Java. But I do believe it will help your business and your decision-making process if you meet IT ½...even ¼ of the way there.

    Epilogue: The door project has been put on hold and yours truly has to return the doors to the hardware store tomorrow.

    Read the article

  • Organization & Architecture UNISA Studies – Chap 4

    - by MarkPearl
    Learning Outcomes

    - Explain the characteristics of memory systems
    - Describe the memory hierarchy
    - Discuss cache memory principles
    - Discuss issues relevant to cache design
    - Describe the cache organization of the Pentium

    Computer Memory Systems

    There are key characteristics of memory:

    - Location – internal or external
    - Capacity – expressed in terms of bytes
    - Unit of transfer – the number of bits read out of or written into memory at a time
    - Access method – sequential, direct, random or associative

    From a user's perspective the two most important characteristics of memory are:

    - Capacity
    - Performance – access time, memory cycle time, transfer rate

    The trade-off for memory happens along three axes:

    - Faster access time, greater cost per bit
    - Greater capacity, smaller cost per bit
    - Greater capacity, slower access time

    This leads to people using a tiered approach in their use of memory. As one goes down the hierarchy, the following occurs:

    - Decreasing cost per bit
    - Increasing capacity
    - Increasing access time
    - Decreasing frequency of access of the memory by the processor

    The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above.

    A portion of main memory can be used as a buffer to hold data temporarily that is to be read out to disk. This is sometimes referred to as a disk cache and improves performance in two ways:

    - Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement.
    - Some data designated for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk.

    Cache Memory Principles

    Cache memory is substantially faster than main memory. A caching system works as follows: when a processor attempts to read a word of memory, a check is made to see if it is in cache memory. If it is, the data is supplied. If it is not in the cache, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block.

    Elements of Cache Design

    While there are a large number of cache implementations, there are a few basic design elements that serve to classify and differentiate cache architectures:

    - Cache addresses
    - Cache size
    - Mapping function
    - Replacement algorithm
    - Write policy
    - Line size
    - Number of caches

    Cache Addresses

    Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used the designer may choose to place the cache between the MMU (memory management unit) and the processor or between the MMU and main memory. The disadvantage of virtual memory is that most virtual memory systems supply each application with the same virtual memory address space (each application sees virtual memory starting at memory address 0), which means the cache memory must be completely flushed with each application context switch, or extra bits must be added to each line of the cache to identify which virtual address space the address refers to.

    Cache Size

    We would like the size of the cache to be small enough so that the overall average cost per bit is close to that of main memory alone, and large enough so that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones.

    Mapping Function

    Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used:

    - Direct – the simplest technique; maps each block of main memory into only one possible cache line
    - Associative – permits each main memory block to be loaded into any line of the cache
    - Set associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages

    For detailed explanations of each approach, read the textbook (pages 148–154); a toy sketch of the direct-mapped case appears at the end of this section.

    Replacement Algorithm

    For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches:

    - LRU (least recently used)
    - FIFO (first in first out)
    - LFU (least frequently used)
    - Random selection

    Write Policy

    When a block resident in the cache is to be replaced, there are two cases to consider:

    - If no writes to that block have happened in the cache, discard it
    - If a write has occurred, a process needs to be initiated where the changes in the cache are propagated back to main memory

    There are several approaches to achieve this, including:

    - Write through – all writes to the cache are done to main memory as well at the point of the change
    - Write back – when a block is replaced, all dirty bits are written back to main memory

    The problem is complicated when we have multiple caches; there are techniques to accommodate for this but I have not summarized them.

    Line Size

    When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that the data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play:

    - Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after it is fetched.
    - As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future.

    The relationship between block size and hit ratio is complex, and no set approach is judged to be the best in all circumstances.
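    To make the direct mapping concrete, here is a minimal, illustrative C# sketch of a direct-mapped cache lookup. It is a teaching toy under simplifying assumptions (word-addressed memory, a tag store only, no data array) and does not model any specific processor:

        // Toy direct-mapped cache: each memory block maps to exactly one line.
        class DirectMappedCache
        {
            private readonly int lineCount;
            private readonly int blockSize;   // words per block/line
            private readonly long?[] tags;    // tag stored per line; null = empty line

            public DirectMappedCache(int lineCount, int blockSize)
            {
                this.lineCount = lineCount;
                this.blockSize = blockSize;
                tags = new long?[lineCount];
            }

            // Returns true on a hit; on a miss, "loads" the block by recording its tag.
            public bool Access(long address)
            {
                long blockNumber = address / blockSize;     // which memory block holds the word
                int line = (int)(blockNumber % lineCount);  // the one line this block can occupy
                long tag = blockNumber / lineCount;         // remaining high-order bits identify the block

                if (tags[line] == tag) return true;         // hit: requested block is resident
                tags[line] = tag;                           // miss: fetch block into its line
                return false;
            }
        }

    Running a stream of addresses through Access() makes the locality argument visible: consecutive addresses within one block hit after the first miss, while two blocks that share a line (same blockNumber modulo lineCount) evict each other repeatedly.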
    Pentium 4 and ARM cache organizations

    The processor core consists of four major components:

    - Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L1 instruction cache
    - Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future
    - Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers
    - Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss and to access the system I/O resources

    Read the article

  • Collision rectangle response

    - by dotty
    I'm having difficulties getting a moveable rectangle to collide with more than one rectangle. I'm using SFML and it has a handy function called Intersect() which takes 2 rectangles and returns the intersection. I have a vector full of rectangles which I want my moveable rectangle to collide with. I'm looping through this using the following code (p is the moveable rectangle). isCollidingWith returns a bool but also uses SFML's Intersect to work out the intersection.

        for (unsigned i = 0; i != testRects.size(); i++) {
            if (p.isCollidingWith(testRects[i])) {
                p.collide(testRects[i]);
            }
        }

    and the actual collide() code:

        void gameObj::collide(gameObj collidingObject)
        {
            printf("%f %f\n", this->colliderResult.width, this->colliderResult.height);

            if (this->colliderResult.width < this->colliderResult.height) {
                // collided on X
                if (this->getCollider().left < collidingObject.getCollider().left) {
                    this->move(-this->colliderResult.width, 0);
                } else {
                    this->move(this->colliderResult.width, 0);
                }
            }

            if (this->colliderResult.width > this->colliderResult.height) {
                if (this->getCollider().top < collidingObject.getCollider().top) {
                    this->move(0, -this->colliderResult.height);
                } else {
                    this->move(0, this->colliderResult.height);
                }
            }
        }

    and the isCollidingWith() code is:

        bool gameObj::isCollidingWith(gameObj testObject)
        {
            if (this->getCollider().intersects(testObject.getCollider(), this->colliderResult)) {
                return true;
            } else {
                return false;
            }
        }

    This works fine when there's only 1 rect in the scene. However, when there's more than one rect it causes issues when working out 2 collisions at once. Any idea how to deal with this correctly? I have uploaded a video to YouTube to show my problem. The console on the far right shows the width and height of the intersections. You can see on the console that it's trying to calculate 2 collisions at once; I think this is where the problem is being caused. The YouTube video is at http://www.youtube.com/watch?v=fA2gflOMcAk

    Also, this image seems to illustrate the problem nicely. Can someone please help, I've been stuck on this all weekend!

    Read the article

  • LINQ – SequenceEqual() method

    - by nmarun
    I have been looking at LINQ extension methods and have blogged about what I learned from them in my blog space. Next in line is the SequenceEqual() method. Here's the description of this method: "Determines whether two sequences are equal by comparing the elements by using the default equality comparer for their type."

    Let's play with some code:

        1: int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };
        2: // int[] numbersCopy = numbers;
        3: int[] numbersCopy = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };
        4:
        5: Console.WriteLine(numbers.SequenceEqual(numbersCopy));

    This gives an output of 'True' – basically, it compares each of the elements in the two arrays and returns true in this case. The result is the same even if you uncomment line 2 and comment line 3 (I didn't need to say that now did I?). So then what happens for custom types? For this, I created a Product class with the following definition:

        class Product
        {
            public int ProductId { get; set; }
            public string Name { get; set; }
            public string Category { get; set; }
            public DateTime MfgDate { get; set; }
            public Status Status { get; set; }
        }

        public enum Status
        {
            Active = 1,
            InActive = 2,
            OffShelf = 3,
        }

    In my calling code, I'm just adding a few product items:

        private static List<Product> GetProducts()
        {
            return new List<Product>
            {
                new Product
                {
                    ProductId = 1,
                    Name = "Laptop",
                    Category = "Computer",
                    MfgDate = new DateTime(2003, 4, 3),
                    Status = Status.Active,
                },
                new Product
                {
                    ProductId = 2,
                    Name = "Compact Disc",
                    Category = "Water Sport",
                    MfgDate = new DateTime(2009, 12, 3),
                    Status = Status.InActive,
                },
                new Product
                {
                    ProductId = 3,
                    Name = "Floppy",
                    Category = "Computer",
                    MfgDate = new DateTime(1993, 3, 7),
                    Status = Status.OffShelf,
                },
            };
        }

    Now for the actual check:

        List<Product> products1 = GetProducts();
        List<Product> products2 = GetProducts();

        Console.WriteLine(products1.SequenceEqual(products2));

    This one returns 'False' and the reason is simple – this one checks for reference equality, and the products in the two lists get different 'memory addresses' (sounds like I'm talking in 'C'). In order to modify this behavior and return a 'True' result, we need to modify the Product class as follows:

        class Product : IEquatable<Product>
        {
            public int ProductId { get; set; }
            public string Name { get; set; }
            public string Category { get; set; }
            public DateTime MfgDate { get; set; }
            public Status Status { get; set; }

            public override bool Equals(object obj)
            {
                return Equals(obj as Product);
            }

            public bool Equals(Product other)
            {
                // Check whether the compared object is null.
                if (ReferenceEquals(other, null)) return false;

                // Check whether the compared object references the same data.
                if (ReferenceEquals(this, other)) return true;

                // Check whether the products' properties are equal.
                return ProductId.Equals(other.ProductId)
                    && Name.Equals(other.Name)
                    && Category.Equals(other.Category)
                    && MfgDate.Equals(other.MfgDate)
                    && Status.Equals(other.Status);
            }

            // If Equals() returns true for a pair of objects
            // then GetHashCode() must return the same value for these objects.
            // Read why in the following articles:
            // http://geekswithblogs.net/akraus1/archive/2010/02/28/138234.aspx
            // http://stackoverflow.com/questions/371328/why-is-it-important-to-override-gethashcode-when-equals-method-is-overriden-in-c
            public override int GetHashCode()
            {
                // Get hash code for the ProductId field.
                int hashProductId = ProductId.GetHashCode();

                // Get hash code for the Name field if it is not null.
                int hashName = Name == null ? 0 : Name.GetHashCode();

                // Get hash code for the Category field.
                int hashCategory = Category.GetHashCode();

                // Get hash code for the MfgDate field.
                int hashMfgDate = MfgDate.GetHashCode();

                // Get hash code for the Status field.
                int hashStatus = Status.GetHashCode();

                // Combine the field hash codes for the product.
                return hashProductId ^ hashName ^ hashCategory ^ hashMfgDate ^ hashStatus;
            }

            public static bool operator ==(Product a, Product b)
            {
                // Enable a == b for null references to return the right value.
                if (ReferenceEquals(a, b))
                {
                    return true;
                }
                // If one is null and the other is not. Remember a == null would lead to a StackOverflow!
                if (ReferenceEquals(a, null))
                {
                    return false;
                }
                return a.Equals((object)b);
            }

            public static bool operator !=(Product a, Product b)
            {
                return !(a == b);
            }
        }

    Now THAT kinda looks overwhelming. But let's take one simple step at a time. The first thing you've noticed is that the class implements the IEquatable<Product> interface – the key step towards achieving our goal. This interface provides us with an 'Equals' method to perform the test for equality with another Product object, in this case. This method is called in the following situations:

    - when you do a ProductInstance.Equals(AnotherProductInstance), and
    - when you perform actions like Contains<T>, IndexOf() or Remove() on your collection

    Coming to the Equals(Product) method defined in the class: the two 'if' blocks check for null and referential equality using the ReferenceEquals() method defined in the Object class. The property comparison that follows is where I'm doing the actual check on the properties of the Product instances. This is what returns the 'True' for us when we run the application. I have also overridden the Object.Equals() method, which calls the Equals() method of the interface. One thing to remember is that any time you override the Equals() method, it's a good practice to override the GetHashCode() method and overload the '==' and '!=' operators. For detailed information on this, please read this and this. Since we've overloaded the operators as well, we get 'True' when we do actions like:

        Console.WriteLine(products1.Contains(products2[0]));
        Console.WriteLine(products1[0] == products2[0]);

    This completes the full circle on the SequenceEqual() method. See the code used in the article here.
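    As a footnote to the approach above: if you cannot (or don't want to) modify the type itself, SequenceEqual also has an overload that accepts an IEqualityComparer<T>. A minimal sketch follows; it compares only ProductId and Name for brevity, so extend it to the other fields as needed:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class ProductComparer : IEqualityComparer<Product>
        {
            public bool Equals(Product x, Product y)
            {
                if (ReferenceEquals(x, y)) return true;
                if (ReferenceEquals(x, null) || ReferenceEquals(y, null)) return false;
                // Illustrative: only two fields compared here.
                return x.ProductId == y.ProductId && x.Name == y.Name;
            }

            public int GetHashCode(Product p)
            {
                int hashName = p.Name == null ? 0 : p.Name.GetHashCode();
                return p.ProductId.GetHashCode() ^ hashName;
            }
        }

    With that in place, products1.SequenceEqual(products2, new ProductComparer()) returns 'True' without Product having to implement IEquatable<Product> at all.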

    Read the article

  • Reset DRAC to factory defaults

    - by yakatz
    I put a DRAC that has not been used in a long time into a PowerEdge 1750 running CentOS 5.8. Because we run our DRACs on a separate network, we don't change the password from the default (root/calvin), but evidently someone changed this one. I tried the regular command line reset (racadm racresetcfg), but I get the following error:

        ERROR: Unable to perform requested operation. If the operation attempted was to configure DRAC, possible reason may be that Local Configuration using RACADM is disabled.

    This implies to me that racadm is able to communicate, but there is a setting on the DRAC that is preventing it from working. I was not able to find any references to this error in any Dell documentation. Has anyone seen this problem and/or know what I can do about it? (The DRAC is useless if I can't log in to it.)

    Read the article

  • Stretch VMWare Player guest OS to fullscreen

    - by Synetech
    I’m using VMWare Player to play an old 16-bit Windows game. Unfortunately the game uses only 640x480 and I cannot figure out how to stretch the VM window to full-screen on the host. I set the guest OS to 640x480, but the screen is still small, in the middle of the screen as seen in figure 1. I even tried setting the compatibility mode to Windows 95 and 640x480, but it has no effect (figure 2) and looks exactly the same as when I set the VM to full-screen (1366x768 on the laptop) and start the game normally. There are few references to stretching a VM. One page mentions setting a Stretch Guest option, but there is no such option, at least not in VMWare Player 4.0.3. I know that VirtualBox has a stretching option, but I’m trying to find a solution for VMWare (Player, not Workstation). Figure 1: Guest OS is pillar-boxed Figure 2: Using compatibility mode

    Read the article

  • The OLE DB provider "SQLNCLI10.1" has not been registered.; 42000.

    - by lankylad
    I have a SQL Server 2008 Analysis Services Project. In the Data Source View I have a Named Query which references a single Data Source containing three tables. The Project processes successfully and the cube can be browsed. I recently added a second Data Source to the Data Source View and linked a table to the original Named Query. When I try to process the project, I get the message: OLE DB error: OLE DB or ODBC error: The OLE DB provider "SQLNCLI10.1" has not been registered.; 42000. The Connection String for both Data Sources uses SQLNCLI10.1

    Read the article

  • PHP 5.3.1 Undefined Symbol: OnUpdateLong error on Apache Startup

    - by docgnome
    I'm running Ubuntu 8.04 on this server. I had PHP 5.2 installed via the package manager. I removed it to install PHP 5.3.1 by hand. I built the packages like so:

        ./configure --prefix=/opt/php --with-mysql --with-curl=/usr/bin --with-apxs2=/usr/bin/apxs2
        make
        make install

    This installed PHP 5.3.1 in /opt/php:

        $ php -v
        PHP 5.3.1 (cli) (built: Dec 7 2009 10:51:14)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2009 Zend Technologies

    However, when I try to start Apache I get this:

        # /etc/init.d/apache2 restart
        * Restarting web server apache2
        apache2: Syntax error on line 185 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/mods-enabled/php5.load: Cannot load /usr/lib/apache2/modules/libphp5.so into server: /usr/lib/apache2/modules/libphp5.so: undefined symbol: OnUpdateLong
        [fail]

    Any ideas what's causing this error? All the references I can see have to do with building php5 packages for php4 or the like. PHP4 has never been installed on this machine.

    Read the article


  • C# XmlSchemaSet - Resolving included schemas to enumerate attribute groups

    - by satixx
    Hi, I currently have a compiled XmlSchemaSet from which I get possible elements/attributes for each specific "parent" element defined in the schema. I have a single "master" XSD schema which includes another schema and uses attributeGroup references for some "common" elements. Here is a sample:

    (MasterSchema.xsd)

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema id="MasterSchema"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="CommonNamespace.xsd"
                   xmlns="CommonNamespace.xsd"
                   xmlns:mstns="CommonNamespace.xsd"
                   elementFormDefault="qualified">
          <xs:include schemaLocation="SourceAttributeGroups.xsd"/>
          <xs:element name="Binding" minOccurs="0" maxOccurs="2">
            <xs:complexType>
              <xs:attribute name="Name" type="xs:string"/>
              <xs:attributeGroup ref="BindingSourceAttributeGroup"/>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    (SourceAttributeGroups.xsd)

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema id="SourceAttributeGroups"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="CommonNamespace.xsd"
                   xmlns="CommonNamespace.xsd"
                   xmlns:mstns="CommonNamespace.xsd"
                   elementFormDefault="qualified">
          <xs:attributeGroup id="BindingSourceAttributeGroup" name="BindingSourceAttributeGroup">
            <xs:attribute name="Source">
              <xs:simpleType>
                <xs:restriction base="xs:string">
                  <xs:enumeration value="Data"/>
                </xs:restriction>
              </xs:simpleType>
            </xs:attribute>
            <!-- When Source is None -->
            <xs:attribute name="Value" type="xs:string"/>
            <!-- Label -->
            <xs:attribute name="Label" type="xs:string"/>
          </xs:attributeGroup>
        </xs:schema>

    I would like to create an XmlSchemaSet in C# which would resolve, compile and "merge" every reference of the MasterSchema so it would finally look like this:

    (MasterSchema.xsd)

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema id="MasterSchema"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="CommonNamespace.xsd"
                   xmlns="CommonNamespace.xsd"
                   xmlns:mstns="CommonNamespace.xsd"
                   elementFormDefault="qualified">
          <xs:include schemaLocation="SourceAttributeGroups.xsd"/>
          <xs:element name="Binding" minOccurs="0" maxOccurs="2">
            <xs:complexType>
              <xs:attribute name="Name" type="xs:string"/>
              <xs:attribute name="Source">
                <xs:simpleType>
                  <xs:restriction base="xs:string">
                    <xs:enumeration value="Data"/>
                  </xs:restriction>
                </xs:simpleType>
              </xs:attribute>
              <!-- When Source is None -->
              <xs:attribute name="Value" type="xs:string"/>
              <!-- Label -->
              <xs:attribute name="Label" type="xs:string"/>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    This way, I could traverse each XmlSchemaParticle of the compiled schema to get every single attribute for a specific element, even when its attributes are defined in an external schema. At the moment, when I get the possible attributes for a "Binding" element, I only get the "Name" attribute since it is originally defined in the "master" schema. What would be the possible solutions to this problem? Thanks! Satixx
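    For what it's worth, here is a minimal C# sketch of the compile-then-traverse approach: after XmlSchemaSet.Compile(), the post-compilation properties (ElementSchemaType, AttributeUses) already have attributeGroup references from included schemas resolved, so no textual "merge" of the XSD files should be needed. The file names and namespace below are taken from the sample above; treat the rest as an illustrative assumption rather than a definitive answer:

        using System;
        using System.Xml.Schema;

        class SchemaSetDemo
        {
            static void Main()
            {
                var set = new XmlSchemaSet();
                // The xs:include in MasterSchema.xsd is resolved relative to this path.
                set.Add("CommonNamespace.xsd", "MasterSchema.xsd");
                set.ValidationEventHandler += (s, e) => Console.WriteLine(e.Message);
                set.Compile();

                // After Compile(), the compiled complex type of "Binding" exposes
                // Source/Value/Label as well as Name via AttributeUses.
                foreach (XmlSchemaElement element in set.GlobalElements.Values)
                {
                    Console.WriteLine(element.QualifiedName);
                    var complexType = element.ElementSchemaType as XmlSchemaComplexType;
                    if (complexType == null) continue;
                    foreach (XmlSchemaAttribute attr in complexType.AttributeUses.Values)
                        Console.WriteLine("  @" + attr.QualifiedName.Name);
                }
            }
        }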

    Read the article

  • Correct PHP5 DLL for Apache 2.2?

    - by Nathan Long
    I have installed Apache 2.2.14 (Win32) on a Windows XP machine and am trying to add the latest PHP module. I downloaded the ZIP file from here labeled "VC9 x86 Non Thread Safe" and extracted to my Apache directory. I then copied php5.dll to Apache's bin directory and copied php.ini to C:\Windows. In httpd.conf, I added these lines:

        LoadModule php5_module "C:/Program Files/Apache Software Foundation/Apache2.2/bin/php5.dll"
        AddType application/x-httpd-php .php

    Now Apache will not start. error.log says this:

        "Can't locate API module structure php5_module in file C:/Program Files/Apache Software Foundation/Apache2.2/bin/php5.dll": No error

    I think I may have the wrong .dll file, because I found tutorials that use the filename php5apache2.dll and I didn't see that in the PHP package I got. Also, I have seen references to a file called php5ts.dll, but I don't see that either. What exactly do I need to make PHP5 work?

    Read the article

  • Setting Proxy Server for IE 10 on Windows 8 using pac file and Group Policy

    - by Greg Bray
    We currently use group policy to configure a proxy server PAC file for Windows XP and Windows 7 computers on our network. We now are starting to get requests for Windows 8, but have noticed that our current GPO does not work for setting the proxy server on Windows 8 clients or server 2012. Is it possible to do this using a 2008 R2 domain controller or would we need to update our domain to a 2012 server? I found a reference to creating new GPO settings for "Internet Explorer 10 and 11" and vague references to using RSAT on Windows 8 to set IE 10 settings via preferences, but nothing that talks about using group policy to manage proxy settings.

    Read the article

  • WCF security via message headers

    - by exalted
    I'm trying to implement "some sort of" server-client & zero-config security for some WCF service. The best (as well as easiest to me) solution that I found on the web is the one described at http://www.dotnetjack.com/post/Automate-passing-valuable-information-in-WCF-headers.aspx (client-side) and http://www.dotnetjack.com/post/Processing-custom-WCF-header-values-at-server-side.aspx (corresponding server-side). Below is my implementation for RequestAuth (described in the first link above):

        using System;
        using System.Diagnostics;
        using System.ServiceModel;
        using System.ServiceModel.Configuration;
        using System.ServiceModel.Dispatcher;
        using System.ServiceModel.Description;
        using System.ServiceModel.Channels;

        namespace AuthLibrary
        {
            /// <summary>
            /// Ref: http://www.dotnetjack.com/post/Automate-passing-valuable-information-in-WCF-headers.aspx
            /// </summary>
            public class RequestAuth : BehaviorExtensionElement, IClientMessageInspector, IEndpointBehavior
            {
                [DebuggerBrowsable(DebuggerBrowsableState.Never)]
                private string headerName = "AuthKey";

                [DebuggerBrowsable(DebuggerBrowsableState.Never)]
                private string headerNamespace = "http://some.url";

                public override Type BehaviorType
                {
                    get { return typeof(RequestAuth); }
                }

                protected override object CreateBehavior()
                {
                    return new RequestAuth();
                }

                #region IClientMessageInspector Members

                // Keeping in mind that I am SENDING something to the server,
                // I only need to implement the BeforeSendRequest method
                public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
                {
                    throw new NotImplementedException();
                }

                public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel)
                {
                    MessageHeader<string> header = new MessageHeader<string>();
                    header.Actor = "Anyone";
                    header.Content = "TopSecretKey";

                    // Creating an untyped header to add to the WCF context
                    MessageHeader unTypedHeader = header.GetUntypedHeader(headerName, headerNamespace);

                    // Add the header to the current request
                    request.Headers.Add(unTypedHeader);
                    return null;
                }

                #endregion

                #region IEndpointBehavior Members

                public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
                {
                    throw new NotImplementedException();
                }

                public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
                {
                    clientRuntime.MessageInspectors.Add(this);
                }

                public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
                {
                    throw new NotImplementedException();
                }

                public void Validate(ServiceEndpoint endpoint)
                {
                    throw new NotImplementedException();
                }

                #endregion
            }
        }

    So first I put this code in my client WinForms application, but then I had problems signing it, because I had to sign all third-party references as well, even though http://msdn.microsoft.com/en-us/library/h4fa028b(v=VS.80).aspx at the section "What Should Not Be Strong-Named" states:

        In general, you should avoid strong-naming application EXE assemblies. A strongly named application or component cannot reference a weak-named component, so strong-naming an EXE prevents the EXE from referencing weak-named DLLs that are deployed with the application. For this reason, the Visual Studio project system does not strong-name application EXEs. Instead, it strong-names the Application manifest, which internally points to the weak-named application EXE.

    I expected VS to avoid this problem, but I had no luck there; it complained about all the unsigned references, so I created a separate "WCF Service Library" project inside my solution containing only the code above and signed that one. At this point the entire solution compiled just okay.

    And here's my problem: when I fired up the "WCF Service Configuration Editor" I was able to add a new behavior element extension (say "AuthExtension"), but then when I tried to add that extension to my endpoint behavior it gives me:

        Exception has been thrown by the target of an invocation.

    So I'm stuck here. Any ideas?

    Read the article

  • How can you use Windows Backup with a TrueCrypt encrypted backup destination?

    - by Burly
    Background

    There are numerous backup solutions out there for Windows and they come in many different forms: from a file copy and/or syncing tool like SyncBackSE, to whole hard drive backup utilities based on Volume Shadow Copy like Acronis TrueImage or Norton Ghost, to block level copy tools like dd. Each of these solutions offers different pros and cons versus the "Windows Backup and Restore Center" feature built into Windows Vista and Windows 7. I am not interested in discussing alternative backup solutions here however, as that has already been covered by numerous other questions.

    Constraints

    There are two "types" of backup supported by the "Windows Backup and Restore Center" (WBRC):

    - File backup (which Windows calls "Back Up Files")
    - Full system backup (which Windows calls "Complete PC Backup")

    I am interested in a solution which supports either and/or both types of backup with WBRC.

    Questions

    How can you use a TrueCrypt encrypted mount point as the destination for the built-in "Windows Backup and Restore Center" feature in Windows Vista and 7?

    See-Also

    Volume Shadow Copy based backup that works with TrueCrypt

    References

    Backup and Restore Center
    Windows Vista - Backup and Restore Center
    Windows 7 - Backup and Restore Center

    Read the article

  • Make bootable iso with grub chainloading

    - by hlovdal
    My parents' PC has Windows 7's boot manager installed in the MBR, and grub2 is installed on /dev/sda2 (booting Linux on /dev/sda2). I want to make a bootable CD that, when booted, just chainloads into the boot manager on the second partition. I assume using grub rather than grub2 for this will be simpler, using the configuration:

        timeout=0
        hiddenmenu
        default=0

        title grub2 (/dev/sda2)
            rootnoverify (hd0,1)
            chainloader (hd0,1)+1

    I know I can make a bootable Linux CD in various ways, but that is not what I want. I just want to put grub/grub2 on the CD, no kernels or programs. The question is: how do I make the ISO file? I have found some references to installing on a floppy or USB disk, but all those assume the device is present when running the grub install commands. An ISO file is different.

    Read the article
