Search Results

Search found 34016 results on 1361 pages for 'static content'.


  • Speed Up the Help Dialog in Windows and Office

    - by Matthew Guay
    When you click help, you don’t want to wait for your computer to bring it to you.  Here’s how you can speed up the help dialog in Windows and Office. If you have a slow internet connection, chances are you’ve been frustrated by the Help dialog in Windows and Office trying to download fresh content every time you open them. This can be great if the updated help files contain better content, but sometimes you just want to find what you were looking for without waiting.  Here’s how you can turn off the automatic online help. Use Local Help in Windows Windows 7 and Vista’s help dialog usually tries to load the latest content from the net, but this can take a long time on slow connections. If you’re seeing the above screen a lot, you may want to switch to offline help.  Click the “Online Help” button at the bottom, and select “Get offline Help”. Now your computer will just load the pre-installed help files.  And don’t worry; if there’s a major update to your help files, Windows will download and install it through Windows Update.   Stupid Geek Tip: An easy way to open Windows Help is to click on your desktop or Start Menu and press F1 on your keyboard. Use Local Help in Office This same trick works in Office 2007 and 2010.  We’ve actually had more problems with Office’s help being tardy. Solve this the same way as with Windows help.  Click on the “Connected to Office.com” or “Connected to Office Online” button, depending on your version of Office, and select “Show content only from this computer”. This will automatically change the settings for Help in all of your Office applications. While this may not be a major trick, it can be helpful especially if you have a slow internet connection and want to get things done quickly.

    Read the article

  • Migration from one domain to another - Transferring the social media stats

    - by Dipak Saraf
    I am planning to move my site from one domain to another, i.e. from domain a.com to b.com. The site has a lot of content, but migrating the content itself is not an issue, and a 301 redirect will take care of all the backlinks. My real worry is transferring the social media share links and stats from a.com to b.com. I need some insight into how these can be migrated seamlessly from a.com to b.com.
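    As a side note, the domain-wide 301 redirect mentioned in the question can be done in a few lines of ASP.NET. The sketch below is purely illustrative and is not part of the original post; it assumes the old site runs ASP.NET 4 or later and uses the a.com/b.com names from the question:

        using System;
        using System.Web;

        public class Global : HttpApplication
        {
            // On the old a.com site: permanently redirect every request to the
            // same path and query string on b.com, preserving link equity.
            protected void Application_BeginRequest(object sender, EventArgs e)
            {
                Uri url = Context.Request.Url;
                if (url.Host.EndsWith("a.com", StringComparison.OrdinalIgnoreCase))
                {
                    Context.Response.RedirectPermanent("http://b.com" + url.PathAndQuery);
                }
            }
        }

    Bear in mind that most social networks key share counts to the exact URL, so while the redirect preserves backlinks, the share counts themselves generally will not follow it.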

    Read the article

  • Oracle ADF Essentials & ADF training material now on the iPad By Grant Ronald

    - by JuergenKress
    Faster and Simpler Java-based Application Development - Now Free. Oracle ADF Essentials is an end-to-end Java EE framework that simplifies application development by providing out-of-the-box infrastructure services and a visual and declarative development experience. Oracle ADF Essentials is free to develop and deploy. Oracle ADF Essentials Overview Demo Tutorial - Using Oracle ADF Essentials with JPA/EJB and JSF Oracle ADF Essentials FAQ Introduction to Oracle ADF Seminar Tutorial - Developing with Oracle ADF Essentials ADF training material now on the iPad By Grant Ronald: My team has developed about a week's worth of ADF training material under the title ADF Insider and ADF Insider Essentials. This is available from our page on OTN. But we are now loading all our content on YouTube as well, so the content can now be accessed on iPads. Over the next couple of weeks we'll also add these YouTube links to the OTN page, but in the meantime, if you have an interest in ADF I strongly urge you to subscribe to our ADFInsiderEssentials YouTube Channel so you can be alerted when new content comes online. Please also provide your comments and thumbs up/down, and let us know which content and topics are of interest to you. GlassFish Extension for Oracle JDeveloper by Shay Shmeltzer: We just released a new version of Oracle JDeveloper - 11.1.2.3. One new feature is built-in support for GlassFish. This includes the ability to create an "application server" connection to GlassFish and then deploy to that server with one click from inside JDeveloper. You can use this for deploying Oracle ADF Essentials applications on GlassFish, but you can also use it to deploy any Java EE application you build in JDeveloper on GlassFish. However, if you are planning to work with GlassFish and JDeveloper on a more regular basis as your development server, then you might find my new extension useful. The new extension allows you to start and stop an external GlassFish instance, as well as start it in debug mode (which will allow JDeveloper to remotely debug your application as it runs on the server). I also added a button that invokes the web admin console of GlassFish. Here is a quick demo that will show you how to work with the extension. WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: adf training,adf,grant Ronald,adf essential,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Workaround: XNA 4 importing only part of 3d model from FBX

    - by Vitus
    Recently I found a problem with importing 3D models from FBX files: sometimes a model is only partially imported. That is, when you draw a 3D model loaded from an FBX file and processed by the content pipeline, you get only some of its meshes. “Sometimes” means the error occurs only for certain files. The results of my research are below. For example, I have a 10 MB binary FBX file with a model (screenshots omitted), and when I load it, the resulting Model instance contains only part of the meshes. Because models from other files import normally, I think it’s a “bad format” file. When you add an FBX file to your XNA Content project and build it, the imported file is processed by the XNA Fbx Importer & Processor. On MSDN I found that FbxImporter is designed to work with the 2006.11 version of the FBX format. My file is in the FBX 2012 format. OK, so I need to convert it to the 2006 format, which can be done with Autodesk FBX Converter 2012.1. I tried converting it to other FBX versions, but without success. I also tried importing my FBX file into 3D MAX, and it imported correctly. I then exported the model from 3D MAX, which generated another FBX file; I added that one to my XNA project, and this time I got the full model, and it rendered well! So the internal data structure of an FBX file matters more for correct XNA import than its version. Unfortunately, Autodesk FBX is not an open file format. If you want to work with FBX directly, you should use the Autodesk FBX SDK; that way you can read the content of an FBX file manually and use it however you like. I then tried converting my source FBX file to DAE (Collada) and the resulting DAE file back to FBX using FBX Converter (FBX -> DAE -> FBX). The resulting FBX file imports normally.   Conclusion: whether the XNA FbxImporter works correctly does not depend on the version (2006, 2011, etc.) or form (binary, ASCII) of the FBX file; the internal FBX data structure is much more important. To make an FBX file "readable" for the XNA importer you can use a double conversion such as FBX -> Collada -> FBX. You can also use the FBX SDK to load data from the FBX manually. P.S. Autodesk FBX Converter 2012 is more than a simple converter. It provides tools such as: FBX Explorer, which shows the structure of an FBX file; FBX Viewer, which renders the content of an FBX file and provides basic interaction like moving and zooming the model; and FBX Take Manager, which allows you to work with embedded animations.
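    As an aside, a quick way to see how much of a model survived the pipeline is to check Model.Meshes after loading and draw whatever is there. The fragment below is only a sketch and not from the original post; the asset name and the view/projection matrices are placeholders, and the code is assumed to sit inside the usual XNA Game class methods:

        // Inside LoadContent():
        Model model = Content.Load<Model>("myFbxModel");   // hypothetical asset name
        System.Diagnostics.Debug.WriteLine("Imported meshes: " + model.Meshes.Count);

        // Inside Draw(), with view/projection supplied by the game's camera:
        foreach (ModelMesh mesh in model.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.World = Matrix.Identity;
                effect.View = view;
                effect.Projection = projection;
            }
            mesh.Draw();
        }

    If Meshes.Count is already short at load time, the meshes were lost during import/processing rather than at draw time, which matches the behaviour described above.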

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data that inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is store. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.Hard Link Issues in a Component LibraryAnother reason I've been hesitant is that I really didn't want to pull in a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to require a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change.  So hard linking the DLL can be problematic for a number reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.Enter Dynamic LoadingSo rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future.But there are also a couple of downsides:No assembly reference means only dynamic access - no compiler type checking or IntellisenseRequirement for the host application to have reference to JSON.NET or else get runtime errorsThe former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.So there are a few things that are needed to make this work:Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)Load types into dynamic variablesUse Reflection for a few tasks like statics/enumsThe dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is it doesn't handle all aspects of Reflection. 
Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non instance members, so there's still a little bit of work left to do with Reflection.Dynamic Object InstantiationThe first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might have not been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable project, assemblies are just in time loaded only when they are accessed.Instantiating a type is a two step process: Finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:/// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). 
Here's what the factory function looks like in JsonSerializationUtils:/// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; }This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config setting that might be useful to set, but the default seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.Another feature I need is have Enum values serialized as strings rather than numeric values which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.Doing the actual JSON ConversionFinally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files so I created four methods that handle these tasks two each for serialization and deserialization for string and file.Here's what the File Serialization looks like:/// <summary> /// Serializes an object instance to a JSON file. 
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; }You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.For full circle operation here's the DeserializeFromFile() version:/// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to serialize to</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; }This code is a little more compact since there are no prettifying options to set. 
Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer.You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.Using the JsonSerializationUtils WrapperThe final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider, that is responsible for handling reading and writing JSON values to and from files. The provider is simple a small wrapper around the SerializationUtils component and there's very little code to make this work now:The whole provider looks like this:/// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if(Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to XmlConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile,false,true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } }This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases.Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.Already, there are several other places in some other tools where I use JSON serialization this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use replacing quite a bit of code that was previously in use. 
And for any other manual JSON operations (in a couple of apps I use JSON Serialization for 'blob' like document storage) this is also going to be handy.Performance?Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test I created a native class that uses an actual reference to JSON.NET and performance was consistently around 85-90% faster with the referenced code. This will change though depending on the size of objects serialized - the larger the object the more processing time is spent inside the actual dynamically activated components and the less difference there will be. Dynamic code is always slower, but how much it really affects your application primarily depends on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling' as I do here the overhead is small enough to not matter.All that being said though - I serialized 10,000 objects in 80ms vs. 45ms so this is hardly slouchy performance. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server one would be well served to create a hard dependency.Dynamic Loading - Worth it?Dynamic loading is not something you need to worry about but on occasion dynamic loading makes sense. But there's a price to be paid in added code and a performance hit which depends on how frequently the dynamic code is accessed. But for some operations that are not pivotal to a component or application and are only used under certain circumstances dynamic loading can be beneficial to avoid having to ship extra files adding dependencies and loading down distributions. These days when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset… © Rick Strahl, West Wind Technologies, 2005-2013
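To make the "none the wiser" point concrete, here is roughly what calling code looks like once the wrapper above is in place. This is only an illustrative sketch; the MyAppConfig class and the file path are made up, and only the two method names come from the article:

        // Hypothetical config POCO; only the JsonSerializationUtils calls below
        // use names taken from the article, the rest is illustrative.
        public class MyAppConfig
        {
            public string Theme { get; set; }
            public int MaxItems { get; set; }
        }

        // ...somewhere in application code:
        var config = new MyAppConfig { Theme = "Dark", MaxItems = 25 };

        bool ok = JsonSerializationUtils.SerializeToFile(
            config, @"c:\temp\app.json", throwExceptions: false, formatJsonOutput: true);

        var loaded = JsonSerializationUtils.DeserializeFromFile(
            @"c:\temp\app.json", typeof(MyAppConfig)) as MyAppConfig;

JSON.NET only has to be present at runtime if these calls are actually made, which is the whole point of the dynamic loading approach.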

    Read the article

  • Conferences: Starting the round for 2011 with Mix

    - by Enrique Lima
    There are several conferences lining up for 2011.  There are some private conferences I will be participating in, and some others where there is an invitation to submit content for consideration.  That is the case with Mix 2011. The date:  April 12-14, 2011 The venue: Mandalay Bay, Las Vegas Here is the general information: http://live.visitmix.com/ To submit content: http://live.visitmix.com/opencall

    Read the article

  • WebCenter Customer Spotlight: Ferrous Resources do Brasil S.A.

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter  Solution Summary: Ferrous Resources do Brasil S.A. (Ferrous) is a startup company whose core business is the exploration, prospection, exploitation, and commercialization of iron ore. They wanted to create an effective, secure and scalable document management system to support the company’s new iron ore exploration operations in Brazil. Ferrous worked with the Oracle Partner 2D Tecnologia to implement a centralized document management system using Oracle WebCenter Content. The single repository holds almost 220,000 files and is expected to grow to 8 million files in the next two years.  The solution has reduced financial audit reporting from two weeks to only four days. Company Overview: Founded in 2007, Ferrous Resources do Brasil S.A. (Ferrous) is a startup company whose core business is the exploration, prospection, exploitation, and commercialization of iron ore. Ferrous intends to become one of the five largest iron ore mining companies in the world within the next few years.  Business Challenges: Ferrous wanted to create an effective, secure and scalable document management system to support the company’s new iron ore exploration operations in Brazil. Solution Deployed: Ferrous worked with the Oracle Partner 2D Tecnologia to implement a centralized document management system using Oracle WebCenter Content. They consolidated all company documents into a single repository to hold almost 220,000 files, including iron-ore project layouts and pictures, for a repository that is expected to grow to 8 million files in the next two years. Business Results: Gained access to reports on individual files of pictures, project layouts, text files, spreadsheets, and slides, enabling the company to find out who opened and altered each file and when, as well as to access previous versions. Enabled investors and board of directors abroad to access all company documents via a Web portal, something that was previously achieved only through e-mails or CD file transfers. Enabled the company to consolidate all files, which were mostly disseminated in pen drives and desktops, so that they are now available to more than 500 system users, including investors, lawyers, partners, and 320 in-company users. Reduced time to search for specific documents, saving several days in financial audit reporting, an activity that previously took two weeks and now requires only four days.  “With Oracle WebCenter Content, we managed to organize, control, and protect the company’s files since the beginning of operations and, as a consequence, can offer rapid and transparent access to all company documents.” Frederico Samartini, Business Performance Manager, Ferrous Resources do Brasil S.A. Additional Information Ferrous Customer Snapshot Oracle WebCenter Content

    Read the article

  • Pluggable Rules for Entity Framework Code First

    - by Ricardo Peres
    Suppose you want a system that lets you plug custom validation rules on your Entity Framework context. The rules would control whether an entity can be saved, updated or deleted, and would be implemented in plain .NET. Yes, I know I already talked about plugable validation in Entity Framework Code First, but this is a different approach. An example API is in order, first, a ruleset, which will hold the collection of rules: 1: public interface IRuleset : IDisposable 2: { 3: void AddRule<T>(IRule<T> rule); 4: IEnumerable<IRule<T>> GetRules<T>(); 5: } Next, a rule: 1: public interface IRule<T> 2: { 3: Boolean CanSave(T entity, DbContext ctx); 4: Boolean CanUpdate(T entity, DbContext ctx); 5: Boolean CanDelete(T entity, DbContext ctx); 6: String Name 7: { 8: get; 9: } 10: } Let’s analyze what we have, starting with the ruleset: Only has methods for adding a rule, specific to an entity type, and to list all rules of this entity type; By implementing IDisposable, we allow it to be cancelled, by disposing of it when we no longer want its rules to be applied. A rule, on the other hand: Has discrete methods for checking if a given entity can be saved, updated or deleted, which receive as parameters the entity itself and a pointer to the DbContext to which the ruleset was applied; Has a name property for helping us identifying what failed. A ruleset really doesn’t need a public implementation, all we need is its interface. The private (internal) implementation might look like this: 1: sealed class Ruleset : IRuleset 2: { 3: private readonly IDictionary<Type, HashSet<Object>> rules = new Dictionary<Type, HashSet<Object>>(); 4: private ObjectContext octx = null; 5:  6: internal Ruleset(ObjectContext octx) 7: { 8: this.octx = octx; 9: } 10:  11: public void AddRule<T>(IRule<T> rule) 12: { 13: if (this.rules.ContainsKey(typeof(T)) == false) 14: { 15: this.rules[typeof(T)] = new HashSet<Object>(); 16: } 17:  18: this.rules[typeof(T)].Add(rule); 19: } 20:  21: public IEnumerable<IRule<T>> GetRules<T>() 22: { 23: if (this.rules.ContainsKey(typeof(T)) == true) 24: { 25: foreach (IRule<T> rule in this.rules[typeof(T)]) 26: { 27: yield return (rule); 28: } 29: } 30: } 31:  32: public void Dispose() 33: { 34: this.octx.SavingChanges -= RulesExtensions.OnSaving; 35: RulesExtensions.rulesets.Remove(this.octx); 36: this.octx = null; 37:  38: this.rules.Clear(); 39: } 40: } Basically, this implementation: Stores the ObjectContext of the DbContext to which it was created for, this is so that later we can remove the association; Has a collection - a set, actually, which does not allow duplication - of rules indexed by the real Type of an entity (because of proxying, an entity may be of a type that inherits from the class that we declared); Has generic methods for adding and enumerating rules of a given type; Has a Dispose method for cancelling the enforcement of the rules. 
A (really dumb) rule applied to Product might look like this: 1: class ProductRule : IRule<Product> 2: { 3: #region IRule<Product> Members 4:  5: public String Name 6: { 7: get 8: { 9: return ("Rule 1"); 10: } 11: } 12:  13: public Boolean CanSave(Product entity, DbContext ctx) 14: { 15: return (entity.Price > 10000); 16: } 17:  18: public Boolean CanUpdate(Product entity, DbContext ctx) 19: { 20: return (true); 21: } 22:  23: public Boolean CanDelete(Product entity, DbContext ctx) 24: { 25: return (true); 26: } 27:  28: #endregion 29: } The DbContext is there because we may need to check something else in the database before deciding whether to allow an operation or not. And here’s how to apply this mechanism to any DbContext, without requiring the usage of a subclass, by means of an extension method: 1: public static class RulesExtensions 2: { 3: private static readonly MethodInfo getRulesMethod = typeof(IRuleset).GetMethod("GetRules"); 4: internal static readonly IDictionary<ObjectContext, Tuple<IRuleset, DbContext>> rulesets = new Dictionary<ObjectContext, Tuple<IRuleset, DbContext>>(); 5:  6: private static Type GetRealType(Object entity) 7: { 8: return (entity.GetType().Assembly.IsDynamic == true ? entity.GetType().BaseType : entity.GetType()); 9: } 10:  11: internal static void OnSaving(Object sender, EventArgs e) 12: { 13: ObjectContext octx = sender as ObjectContext; 14: IRuleset ruleset = rulesets[octx].Item1; 15: DbContext ctx = rulesets[octx].Item2; 16:  17: foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Added)) 18: { 19: Object entity = entry.Entity; 20: Type realType = GetRealType(entity); 21:  22: foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable)) 23: { 24: if (rule.CanSave(entity, ctx) == false) 25: { 26: throw (new Exception(String.Format("Cannot save entity {0} due to rule {1}", entity, rule.Name))); 27: } 28: } 29: } 30:  31: foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Deleted)) 32: { 33: Object entity = entry.Entity; 34: Type realType = GetRealType(entity); 35:  36: foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable)) 37: { 38: if (rule.CanDelete(entity, ctx) == false) 39: { 40: throw (new Exception(String.Format("Cannot delete entity {0} due to rule {1}", entity, rule.Name))); 41: } 42: } 43: } 44:  45: foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Modified)) 46: { 47: Object entity = entry.Entity; 48: Type realType = GetRealType(entity); 49:  50: foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable)) 51: { 52: if (rule.CanUpdate(entity, ctx) == false) 53: { 54: throw (new Exception(String.Format("Cannot update entity {0} due to rule {1}", entity, rule.Name))); 55: } 56: } 57: } 58: } 59:  60: public static IRuleset CreateRuleset(this DbContext context) 61: { 62: Tuple<IRuleset, DbContext> ruleset = null; 63: ObjectContext octx = (context as IObjectContextAdapter).ObjectContext; 64:  65: if (rulesets.TryGetValue(octx, out ruleset) == false) 66: { 67: ruleset = rulesets[octx] = new Tuple<IRuleset, DbContext>(new Ruleset(octx), context); 68: 69: octx.SavingChanges += OnSaving; 70: } 71:  72: return (ruleset.Item1); 73: } 74: } It relies on the SavingChanges event of the ObjectContext to intercept the saving operations before they are actually issued. 
Yes, it uses a bit of dynamic magic! Very handy, by the way! So, let’s put it all together: 1: using (MyContext ctx = new MyContext()) 2: { 3: IRuleset rules = ctx.CreateRuleset(); 4: rules.AddRule(new ProductRule()); 5:  6: ctx.Products.Add(new Product() { Name = "xyz", Price = 50000 }); 7:  8: ctx.SaveChanges(); //an exception is fired here 9:  10: //when we no longer need to apply the rules 11: rules.Dispose(); 12: } Feel free to use it and extend it any way you like, and do give me your feedback! As a final note, this can be easily changed to support plain old Entity Framework (not Code First, that is), if that is what you are using.

    Read the article

  • Class Design -- Multiple Calls from One Method or One Call from Multiple Methods?

    - by Andrew
    I've been working on some code recently that interfaces with a CMS we use and it's presented me with a question on class design that I think is applicable in a number of situations. Essentially, what I am doing is extracting information from the CMS and transforming this information into objects that I can use programatically for other purposes. This consists of two steps: Retrieve the data from the CMS (we have a DAL that I use, so this is essentially just specifying what data from the CMS I want--no connection logic or anything like that) Map the parsed data to my own [C#] objects There are basically two ways I can approach this: One call from multiple methods public void MainMethodWhereIDoStuff() { IEnumerable<MyObject> myObjects = GetMyObjects(); // Do other stuff with myObjects } private static IEnumerable<MyObject> GetMyObjects() { IEnumerable<CmsDataItem> cmsDataItems = GetCmsDataItems(); List<MyObject> mappedObjects = new List<MyObject>(); // do stuff to map the CmsDataItems to MyObjects return mappedObjects; } private static IEnumerable<CmsDataItem> GetCmsDataItems() { List<CmsDataItem> cmsDataItems = new List<CmsDataItem>(); // do stuff to get the CmsDataItems I want return cmsDataItems; } Multiple calls from one method public void MainMethodWhereIDoStuff() { IEnumerable<CmsDataItem> cmsDataItems = GetCmsDataItems(); IEnumerable<MyObject> myObjects = GetMyObjects(cmsDataItems); // do stuff with myObjects } private static IEnumerable<MyObject> GetMyObjects(IEnumerable<CmsDataItem> itemsToMap) { // ... } private static IEnumerable<CmsDataItem> GetCmsDataItems() { // ... } I am tempted to say that the latter is better than the former, as GetMyObjects does not depend on GetCmsDataItems, and it is explicit in the calling method the steps that are executed to retrieve the objects (I'm concerned that the first approach is kind of an object-oriented version of spaghetti code). On the other hand, the two helper methods are never going to be used outside of the class, so I'm not sure if it really matters whether one depends on the other. Furthermore, I like the fact that in the first approach the objects can be retrieved from one line-- most likely anyone working with the main method doesn't care how the objects are retrieved, they just need to retrieve the objects, and the "daisy chained" helper methods hide the exact steps needed to retrieve them (in practice, I actually have a few more methods but am still able to retrieve the object collection I want in one line). Is one of these methods right and the other wrong? Or is it simply a matter of preference or context dependent?

    Read the article

  • Creating an Accordion Menu with Ajax

    - by jaullo
    Ajax is one of the great component sets built for use in ASP.NET; it provides a large number of features and strengthens our applications while keeping things simple and agile. This post is dedicated to creating an accordion-style menu with Ajax. As we know, to use any of the Ajax components there must be a ScriptManager registered on our page, which will be in charge of managing our controls. So the first thing we will do is create our ScriptManager:  <asp:ScriptManager ID="ScriptManager1" runat="server"> </asp:ScriptManager> Next we define our Accordion element and set some of its basic properties:  <cc1:Accordion ID="AccordionCtrl" runat="server" SelectedIndex="0" HeaderCssClass="accordionHeader" ContentCssClass="accordionContent" AutoSize="None" FadeTransitions="true" TransitionDuration="250" FramesPerSecond="40" For the accordion to work we must declare PANES inside it; these panes are responsible for holding the elements, links, or information we want to show:  <Panes> <cc1:AccordionPane ID="AccordionPane0" runat="server"> <Header>Matenimiento</Header> <Content> <li><a href="mypagina.aspx">My página de prueba</a></li> </Content> </cc1:AccordionPane> As you can see, we can declare as many AccordionPanes as we want; each AccordionPane represents a category element within the accordion. Finally, we must close the Panes and Accordion elements we opened at the beginning:  </Panes> </cc1:Accordion> Our finished example should look like this: <asp:ScriptManager ID="ScriptManager1" runat="server"> </asp:ScriptManager> <cc1:Accordion ID="AccordionCtrl" runat="server" SelectedIndex="0" HeaderCssClass="accordionHeader" ContentCssClass="accordionContent" AutoSize="None" FadeTransitions="true" TransitionDuration="250" FramesPerSecond="40" > <Panes> <cc1:AccordionPane ID="AccordionPane0" runat="server"> <Header>Matenimiento</Header> <Content> <li><a href="mypagina.aspx">My página de prueba</a></li> </Content> </cc1:AccordionPane> </Panes> </cc1:Accordion> With that, our accordion-style menu should be up and running: a simple and agile way to create a menu in ASP.NET with Ajax.

    Read the article

  • Oracle WebCenter in Action: Best Practices from Oracle Consulting

    - by Kellsey Ruppel
    Oracle WebCenter in Action: Best Practices from Oracle Consulting. See concrete, real-world examples of deployments throughout the Oracle WebCenter stack. Oracle Consulting will lead you through a discussion about best practices and key customer use cases, as well as offer practical tips to support web experience management, enterprise content management, and portal deployments. Watch this webcast as our presenters discuss: best practices for deployments of large, complex architectures with Oracle WebCenter Sites; key deployments and helpful hints for Oracle WebCenter Content; and performance tuning takeaways when using Oracle WebCenter Portal. Watch the webcast by registering now.

    Read the article

  • WebCenter Customer Spotlight: Los Angeles Department of Water and Power

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter  Solution Summary: Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States with over 1.6 million customers. LADWP provides water and power for millions of residential & commercial customers in Southern California. The goal of the project was to implement a newly designed web portal to increase customer self-service while reducing transactions via IVR and automating many of the paper-based processes into web-based workflows for their 1.6 million customers. LADWP implemented a Self Service Portal using Oracle WebCenter Portal & Oracle WebCenter Content and Oracle SOA Suite for the integration of their complex back-end systems infrastructure. The new portal has received extremely positive feedback from not only the customers and users of the portal, but also other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for their innovative solution. Company Overview: Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States with over 1.6 million customers. LADWP provides water and power for millions of residential & commercial customers in Southern California. LADWP also bills most of these customers for sanitation services provided by another department in the city of Los Angeles.  Business Challenges: The goal of the project was to implement a newly designed web portal that is easy to navigate from a web browser and mobile devices, as well as be the platform for surfacing internet and intranet applications at LADWP. The primary objective of the new portal was to increase customer self-service while reducing transactions via IVR and walk-up, and to automate many of the paper-based processes into web-based workflows for customers. This includes automation of Self Service implemented through My Account (Bill Pay, Payment History, Bill History, Usage analysis, Service Request Management) Financial Assistance Programs Customer Rebate Programs Turn Off/Turn On/Transfer of Services Outage Reporting eNotification (SMS, email) Solution Deployed: LADWP implemented a Self Service Portal using Oracle WebCenter Portal & Oracle WebCenter Content. Using Oracle SOA Suite they integrated various back-end systems including Oracle Siebel CRM IBM Mainframe based CIS FILENET for document management EBP Electronic Bill Payment System HP Imprint System for BillXML data Other systems including outage reporting systems, SMS service, etc. The new portal’s features include: Complete Graphical redesign based on best practices in UI Design for high usability Customer Self Service implemented through MyAccount (Bill Pay, Payment History, Bill History, Usage Analysis, Service Request Management) Financial Assistance Programs (CRM, WebCenter) Customer Rebate Programs (CRM, WebCenter) Turn On/Off/Transfer of services (Commercial & Residential) Outage Reporting eNotification (SMS, email) Multilingual (English & Spanish) – using WebCenter multi-language support Section 508 (ADA) Compliant Search – Using WebCenter SES (Secured Enterprise Search) Distributed Authorship in WebCenter Content Mobile Access (any Mobile Browser) Business Results: The new portal has received extremely positive feedback from not only customers and users of the portal, but also other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for their innovative solution.
Additional Information LADWP OpenWorld presentation Oracle WebCenter Portal Oracle WebCenter Content Oracle SOA Suite

    Read the article

  • Beware of const members

    - by nmarun
    I happened to learn a new thing about const today and how one needs to be careful with its usage. Let’s say I have a third-party assembly ‘ConstVsReadonlyLib’ with a class named ConstSideEffect.cs: 1: public class ConstSideEffect 2: { 3: public static readonly int StartValue = 10; 4: public const int EndValue = 20; 5: } In my project, I reference the above assembly as follows: 1: static void Main(string[] args) 2: { 3: for (int i = ConstSideEffect.StartValue; i < ConstSideEffect.EndValue; i++) 4: { 5: Console.WriteLine(i); 6: } 7: Console.ReadLine(); 8: } You’ll see values 10 through 19 as expected. Now, let’s say I receive a new version of the ConstVsReadonlyLib. 1: public class ConstSideEffect 2: { 3: public static readonly int StartValue = 5; 4: public const int EndValue = 30; 5: } If I just drop this new assembly in the bin folder and run the application, without rebuilding my console application, my thinking was that the output would be from 5 to 29. Of course I was wrong… if not you’d not be reading this blog. The actual output is from 5 through 19. The reason is due to the behavior of const and readonly members. To begin with, const is the compile-time constant and readonly is a runtime constant. Next, when you compile the code, a compile-time constant member is replaced with the value of the constant in the code. But, the IL generated when you reference a read-only constant, references the readonly variable, not its value. So, the IL version of the Main method, after compilation actually looks something like: 1: static void Main(string[] args) 2: { 3: for (int i = ConstSideEffect.StartValue; i < 20; i++) 4: { 5: Console.WriteLine(i); 6: } 7: Console.ReadLine(); 8: } I’m no expert with this IL thingi, but when I look at the disassembled code of the exe file (using IL Disassembler), I see the following: I see our readonly member still being referenced by the variable name (ConstVsReadonlyLib.ConstSideEffect::StartValue) in line 0001. Then there’s the Console.WriteLine in line 000b and finally, see the value of 20 in line 0017. This, I’m pretty sure is our const member being replaced by its value which marks the upper bound of the ‘for’ loop. Now you know why the output was from 5 through 19. This definitely is a side-effect of having const members and one needs to be aware of it. While we’re here, I’d like to add a few other points about const and readonly members: const is slightly faster, but is less flexible readonly cannot be declared within a method scope const can be used only on primitive types (numbers and strings) Just wanted to share this before going to bed!
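    As a rough sketch of the fix implied above (this is not code from the post): if a value in a shipped library may change between releases, exposing it as static readonly instead of const keeps consumers from baking the old value into their own IL:

        public class ConstSideEffect
        {
            // Resolved when the assembly loads, so callers compiled against an
            // older build of this library still see the updated values at runtime.
            public static readonly int StartValue = 5;
            public static readonly int EndValue = 30;
        }

    The trade-off, beyond the points listed above, is that a readonly field cannot be used where the compiler demands a true compile-time constant, such as an attribute argument or a case label.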

    Read the article

  • web hosting locally

    - by Pradyut Bhattacharya
    I have made a website and hosted it on my local computer using a static IP. Where can I buy a domain name such as www.something.com, such that it points to my static IP, so that a page like http://localhost/index.jsp can be accessed as http://www.something.com/index.jsp? Does it matter whether I run the server locally or buy a managed web hosting server from a big company if the traffic on my site is low? Thanks.
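    For what it is worth, the domain-to-IP mapping being asked about is just an A record in the domain's DNS zone at whatever registrar sells the name. A hypothetical zone fragment is shown below; the address is a documentation placeholder, not a real IP:

        ; something.com zone (illustrative only)
        something.com.        3600  IN  A  203.0.113.10
        www.something.com.    3600  IN  A  203.0.113.10

    Whether to keep hosting locally or buy managed hosting then comes down to uptime, bandwidth, and the ISP's terms of service (many residential plans block inbound port 80) rather than traffic volume alone.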

    Read the article

  • For On Page SEO, This Works Like Crazy

    When it comes to search engines, we need to make sure that our website tells them exactly what it is and what its content is about. Once we have this right, which comes from on-page optimization and the content itself, we need to build backlinks to our website; this increases the site's popularity in the search engines' eyes and lifts us up through the rankings.

    Read the article

  • SCORM and the Learning Management System (LMS)

    What actually is SCORM? SCORM, the Shareable Content Object Reference Model, is a standard for web-based e-learning that was developed to define communication between client-side content and a runtime environment such as a learning management system. [Author: Stuart Campbell - Computers and Internet - October 05, 2009]

    Read the article

  • How Cheap Websites Can Save You a Ton of Money!

    OK, so you need a website, and you have already started looking at cheap websites to buy. But do you know what is actually involved in getting one set up and running so it is ready for you to start adding content? In fact, do you even know that you need to add content, or are you assuming that someone else will do this for you? Read on to find out the best way to save a ton of money by buying a cheap website.

    Read the article

  • Why was Android's ContentProvider created?

    - by satur9nine
    The title sums up my question, but to elaborate: what I want to understand is why the Android designers want apps that need to work with shared data to use a Content Provider rather than just accessing the SQLite database directly. The only reason I can think of is security, because certain files can be accessed only by certain processes, and in that way the Content Provider is the gatekeeper that ensures each app has the proper privileges before allowing read and/or write access to the database file. Is that the primary reason why ContentProvider was created?

    Read the article

  • Easy SEO For Beginners

    This may sound surprising, but SEO experts find that getting links from other websites is just as important as on-page factors such as actual content, if not more so. Imagine that: inbound links more powerful or meaningful than the content itself.

    Read the article

  • To disallow indexing the category and tag listings in a blog

    - by Mert Nuhoglu
    Mark Wilson says that category and tag listings in a blog should be disallowed in order to prevent duplicate content. I understand this. However, I want to put internal links on keywords in blog posts pointing to the tag and category pages so that readers can find more relevant content. I wonder whether internal links to category/tag pages that are disallowed in robots.txt still count as useful from the perspective of SEO internal linking.
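    For reference, the robots.txt rules being discussed typically look like the fragment below; the exact path prefixes depend on the blog platform and are only illustrative here:

        User-agent: *
        Disallow: /tag/
        Disallow: /category/

    Disallow only stops crawlers from fetching those listing pages; the internal links themselves still sit on crawlable posts, so readers can follow them either way.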

    Read the article

  • How to unicode Myanmar texts on Java? [closed]

    - by Spacez Ly Wang
    I'm just a beginner at Java. I'm trying to display Myanmar Unicode text correctly in a Java GUI (Swing/AWT). I have four TrueType fonts that support Myanmar Unicode text: Myanmar3, Padauk, Tharlon, and Myanmar Text (built into Windows 8). You may need the fonts before running the code; please Google for them. Each of the fonts displays differently, and incorrectly, in the Java GUI. Here is the code for a label displaying Myanmar text: package javaapplication1; import javax.swing.JFrame; import javax.swing.JLabel; public class CusFrom { private static void createAndShowGUI() { JFrame frame = new JFrame("Hello World Swing"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); String s = "\u1015\u102F \u103C\u1015\u102F"; JLabel label = new JLabel(s); label.setFont(new java.awt.Font("Myanmar3", 0, 20)); // font goes here: Myanmar Text, Padauk, Myanmar3, Tharlon frame.getContentPane().add(label); frame.pack(); frame.setVisible(true); } public static void main(String[] args) { javax.swing.SwingUtilities.invokeLater(new Runnable() { public void run() { createAndShowGUI(); } }); } } The outputs vary (screenshots for Myanmar3, Padauk, Tharlon, and Myanmar Text omitted). What is the correct form (as seen in Notepad)? Next is the code for a text field for entering Myanmar text: package javaapplication1; import javax.swing.JFrame; import javax.swing.JTextField; public class XusForm { private static void createAndShowGUI() { JFrame frame = new JFrame("Frame Title"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); JTextField textfield = new JTextField(); textfield.setFont(new java.awt.Font("Myanmar3", 0, 20)); frame.getContentPane().add(textfield); frame.pack(); frame.setVisible(true); } public static void main(String[] args) { javax.swing.SwingUtilities.invokeLater(new Runnable() { public void run() { createAndShowGUI(); } }); } } Again the output varies as I type Unicode text on the keyboard (screenshots for Myanmar Text, Padauk, Myanmar3, and Tharlon omitted). The fonts work well on Linux when opening text files with the Text Editor application. My question is how to display Myanmar Unicode text in a Java GUI. Do I need additional code to display it correctly, or does Java still have rendering errors? The fonts display well in a web application (HTML, CSS), but I'm not sure about displaying them in a Java web application.

    Read the article

  • Help with "Cannot find ContentTypeReader BB.HeightMapInfoReader, BB, Version=1.0.0.0, Culture=neutral." needed

    - by rFactor
    Hi, I have this irritating problem in XNA that I have spent my Saturday with: Cannot find ContentTypeReader BB.HeightMapInfoReader, BB, Version=1.0.0.0, Culture=neutral. It throws me that when I do (within the game assembly's Renderer.cs class): this.terrain = this.game.Content.Load<Model>("heightmap"); There is a heightmap.bmp and I don't think there's anything wrong with it, because I used it in a previous version which I switched to this new better system. So, I have a GeneratedGeometryPipeline assembly that has these classes: HeightMapInfoContent, HeightMapInfoWriter, TerrainProcessor. The GeneratedGeometryPipeline assembly does not reference any other assemblies under the solution. Then I have the game assembly that neither references any other solution assemblies and has these classes: HeightMapInfo, HeightMapInfoReader. All game assembly classes are under namespace BB and the GeneratedGeometryPipeline classes are under the namespace GeneratedGeometryPipeline. I do not understand why it does not find it. Here's some code from the GeneratedGeometryPipeline.HeightMapInfoWriter: /// <summary> /// A TypeWriter for HeightMapInfo, which tells the content pipeline how to save the /// data in HeightMapInfo. This class should match HeightMapInfoReader: whatever the /// writer writes, the reader should read. /// </summary> [ContentTypeWriter] public class HeightMapInfoWriter : ContentTypeWriter<HeightMapInfoContent> { protected override void Write(ContentWriter output, HeightMapInfoContent value) { output.Write(value.TerrainScale); output.Write(value.Height.GetLength(0)); output.Write(value.Height.GetLength(1)); foreach (float height in value.Height) { output.Write(height); } foreach (Vector3 normal in value.Normals) { output.Write(normal); } } /// <summary> /// Tells the content pipeline what CLR type the /// data will be loaded into at runtime. /// </summary> public override string GetRuntimeType(TargetPlatform targetPlatform) { return "BB.HeightMapInfo, BB, Version=1.0.0.0, Culture=neutral"; } /// <summary> /// Tells the content pipeline what worker type /// will be used to load the data. /// </summary> public override string GetRuntimeReader(TargetPlatform targetPlatform) { return "BB.HeightMapInfoReader, BB, Version=1.0.0.0, Culture=neutral"; } } Can someone help me out?
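    Judging from the HeightMapInfoWriter shown above, the reader the runtime is looking for would simply mirror it field for field. The sketch below is only a guess at what BB.HeightMapInfoReader might look like, not code from the question; in particular, the HeightMapInfo constructor is assumed, since that class is not shown:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Content;

        namespace BB
        {
            // Reads back exactly what HeightMapInfoWriter wrote, in the same order.
            public class HeightMapInfoReader : ContentTypeReader<HeightMapInfo>
            {
                protected override HeightMapInfo Read(ContentReader input,
                                                      HeightMapInfo existingInstance)
                {
                    float terrainScale = input.ReadSingle();
                    int width = input.ReadInt32();
                    int height = input.ReadInt32();

                    float[,] heights = new float[width, height];
                    for (int x = 0; x < width; x++)
                        for (int z = 0; z < height; z++)
                            heights[x, z] = input.ReadSingle();

                    Vector3[,] normals = new Vector3[width, height];
                    for (int x = 0; x < width; x++)
                        for (int z = 0; z < height; z++)
                            normals[x, z] = input.ReadVector3();

                    // Hypothetical constructor; adjust to however HeightMapInfo
                    // actually stores its data.
                    return new HeightMapInfo(heights, normals, terrainScale);
                }
            }
        }

    If a reader like this already exists, the other thing worth double-checking is that the game assembly really is named BB, because the string returned by GetRuntimeReader names the assembly as well as the type.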

    Read the article

  • How do search engines segment against locale?

    - by Hope I Helped
    Assume I run a website with multiple language versions. If I have a Spanish section, it should be included in Spanish-segmented search engines such as Google Spain, Google Peru, Google El Salvador, etc., and excluded from the others. Likewise, even though the website has content in Chinese, multilingual countries such as Singapore should be served content in their main language (English in this case). What is the best approach to ensure the appropriate language is associated with the various geographically segmented search engines?

    Read the article

  • Strange Google Analytics result when new site launched

    - by Howard
    I have a website that contained only a few pages, and we have now launched a revamped site with several hundred pages. We are seeing a strange Google Analytics result, as follows: Before: Traffic sources (all traffic): 674; Content (all pages, unique PV): 674. After: Traffic sources (all traffic): 291; Content (all pages, unique PV): 1235. As you can see, unique page views have increased as expected (since we have more pages now and the site is better), but why are traffic sources lower, and by such a large gap? Any idea?

    Read the article

  • Redpaper Available: Collaboration Platform in PeopleSoft

    - by matthew.haavisto
    With the availability of Related Content and Collaborative Workspaces as part of PeopleTools 8.50, there is increasing need and demand for understanding how to set up collaborative capabilities in the PeopleSoft platform. To help with that, we've recently published a redpaper that can help you understand how to set up Related Content services and Collaborative Workspaces for all your PeopleSoft applications. You can find it on My Oracle Support here. The redpaper is a nice guide, and you can also find more information on these subjects in PeopleBooks.

    Read the article
