Search Results

Search found 51884 results on 2076 pages for 'custom type'.


  • How do I access the preferences from my main dialog window? Also how do I add a new preference?

    - by Captain_Glen
    class PreferencesCalorieBurnerDialog(PreferencesDialog):
        __gtype_name__ = "PreferencesCalorieBurnerDialog"

        def finish_initializing(self, builder):  # pylint: disable=E1002
            """Set up the preferences dialog"""
            super(PreferencesCalorieBurnerDialog, self).finish_initializing(builder)

            # Bind each preference widget to gsettings
            settings = Gio.Settings("net.launchpad.calorie-burner")
            widget = self.builder.get_object('example_entry')
            settings.bind("example", widget, "text", Gio.SettingsBindFlags.DEFAULT)

            # Custom preference
            widget = self.builder.get_object('weight')
            settings.bind("weight", widget, "float", Gio.SettingsBindFlags.DEFAULT)

    Main dialog:

        self.PreferencesDialog.get_weight()???

    Read the article

  • How does a collision engine work?

    - by JXPheonix
    Original question: Click me How exactly does a collision engine work? This is an extremely broad question. What code keeps things bouncing against each other, and what code makes the player walk into a wall instead of through it? How does the code constantly refresh the player's position and the objects' positions to keep gravity and collision working as they should? If you don't know what a collision engine is, it's basically what platform games use to make the player actually hit walls and the like. There's the 2D type and the 3D type, but they all accomplish the same thing: collision. So, what keeps a collision engine ticking?
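    Not part of the original question, but as a hedged illustration of the core test most 2D collision engines are built on - an axis-aligned bounding-box (AABB) overlap check run every frame before movement is applied - here is a minimal C# sketch (the Rect type and the numbers are made up):

        // Minimal sketch: axis-aligned bounding-box (AABB) overlap test.
        // A real engine runs this (or a swept version) each frame for every
        // pair of nearby objects, then resolves any overlap before drawing.
        struct Rect
        {
            public float X, Y, Width, Height;

            public bool Intersects(Rect other)
            {
                return X < other.X + other.Width && X + Width > other.X &&
                       Y < other.Y + other.Height && Y + Height > other.Y;
            }
        }

        class CollisionDemo
        {
            static void Main()
            {
                var player = new Rect { X = 10, Y = 0, Width = 16, Height = 32 };
                var wall   = new Rect { X = 20, Y = 0, Width = 8,  Height = 64 };

                // Propose the movement first; only apply it if the new
                // position would not overlap the wall.
                var next = player;
                next.X += 5f;
                System.Console.WriteLine(next.Intersects(wall) ? "blocked by wall" : "move allowed");
            }
        }

    The same idea scales up: broad-phase structures (grids, quadtrees) just decide which pairs of objects are worth running this narrow-phase test on.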

    Read the article

  • Data Transformation Pipeline

    - by davenewza
    I have created a kind of data pipeline to transform coordinate data into more useful information. Here is the shell of the pipeline:

        public class PositionPipeline
        {
            protected List<IPipelineComponent> components;

            public PositionPipeline()
            {
                components = new List<IPipelineComponent>();
            }

            public PositionPipelineEntity Process(Position position)
            {
                foreach (var component in components)
                {
                    position = component.Execute(position);
                }
                return position;
            }

            public PositionPipeline RegisterComponent(IPipelineComponent component)
            {
                components.Add(component);
                return this;
            }
        }

    Every IPipelineComponent accepts and returns the same type - a PositionPipelineEntity:

        public interface IPipelineComponent
        {
            PositionPipelineEntity Execute(PositionPipelineEntity position);
        }

    The PositionPipelineEntity needs to have many properties, many of which are unused in certain components and required in others. Some properties will also have become redundant at the end of the pipeline. For example, these components could be executed:

    - TransformCoordinatesComponent: parses the raw coordinate data into a Coordinate type.
    - DetermineCountryComponent: determines and stores the country code.
    - DetermineOnRoadComponent: determines and stores whether the coordinate is on a road.

        pipeline
            .RegisterComponent(new TransformCoordinatesComponent())
            .RegisterComponent(new DetermineCountryComponent())
            .RegisterComponent(new DetermineOnRoadComponent());

        pipeline.Process(positionPipelineEntity);

    The PositionPipelineEntity type:

        public class PositionPipelineEntity
        {
            // Only relevant to the TransformCoordinatesComponent
            public decimal RawCoordinateLatitude { get; set; }

            // Only relevant to the TransformCoordinatesComponent
            public decimal RawCoordinateLongitude { get; set; }

            // Required by all components after TransformCoordinatesComponent
            public Coordinate CoordinateLatitude { get; set; }

            // Required by all components after TransformCoordinatesComponent
            public Coordinate CoordinateLongitude { get; set; }

            // Set in DetermineCountryComponent, not required anywhere.
            // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent)
            public string CountryCode { get; set; }

            // Set in DetermineOnRoadComponent, not required anywhere.
            // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent)
            public bool OnRoad { get; set; }
        }

    Problems: I'm very concerned about the dependency that a component has on properties. One way to solve this would be to create specific types for each component, but then I cannot chain them together like this. The other problem is that the order of components in the pipeline matters - there is some dependency - and the current structure does not provide any static or runtime checking for such a thing. Any feedback would be appreciated.
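    One hedged sketch (my own illustration, not the poster's code) of the direction the "specific types for each component" idea could take: give each component explicit input and output type parameters so the compiler enforces both the property dependencies and the ordering. All names below are invented.

        // Sketch: strongly typed pipeline stages. Each stage declares what it
        // consumes and what it produces, so a wrong ordering fails to compile.
        using System;

        public interface IPipelineComponent<TIn, TOut>
        {
            TOut Execute(TIn input);
        }

        public sealed class RawPosition { public decimal Lat; public decimal Lon; }
        public sealed class Coordinates { public decimal Lat; public decimal Lon; }
        public sealed class LocatedPosition { public Coordinates Coords; public string CountryCode; }

        public class TransformCoordinatesComponent : IPipelineComponent<RawPosition, Coordinates>
        {
            public Coordinates Execute(RawPosition input) =>
                new Coordinates { Lat = input.Lat, Lon = input.Lon };
        }

        public class DetermineCountryComponent : IPipelineComponent<Coordinates, LocatedPosition>
        {
            public LocatedPosition Execute(Coordinates input) =>
                new LocatedPosition { Coords = input, CountryCode = "ZA" }; // real lookup omitted
        }

        public static class Pipeline
        {
            // Chaining is just function composition; incompatible stages will not compile.
            public static TOut Run<TIn, TMid, TOut>(
                TIn input,
                IPipelineComponent<TIn, TMid> first,
                IPipelineComponent<TMid, TOut> second)
            {
                return second.Execute(first.Execute(input));
            }
        }

    The trade-off, as the question already hints, is that a flat RegisterComponent list no longer works; composition has to happen where the concrete types are known, or behind a small builder that hides the generics.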

    Read the article

  • Is nesting types considered bad practice?

    - by Rob Z
    As the title says: is nesting types (e.g. enumerated types or structures in a class) considered bad practice or not? When you run Code Analysis in Visual Studio it returns the following message, which implies it is:

        Warning 34 CA1034 : Microsoft.Design : Do not nest type 'ClassName.StructureName'. Alternatively, change its accessibility so that it is not externally visible.

    However, when I follow the recommendation of the Code Analysis I find that there tend to be a lot of structures and enumerated types floating around in the application that might only apply to a single class or would only be used with that class. As such, would it be appropriate to nest the types in that case, or is there a better way of doing it?
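    For concreteness, a small hypothetical example (types invented) of the two shapes CA1034 is choosing between - a nested public type versus a top-level type related by naming:

        // Option 1: a nested public type. Convenient when the enum only makes
        // sense together with the class, but CA1034 flags it because it is
        // externally visible.
        public class Order
        {
            public enum Status { Pending, Shipped, Delivered }
            public Status CurrentStatus { get; set; }
        }

        // Option 2: what the analyzer prefers - a top-level type, with the
        // relationship expressed through naming rather than nesting.
        public enum ShipmentStatus { Pending, Shipped, Delivered }

        public class Shipment
        {
            public ShipmentStatus CurrentStatus { get; set; }
        }

    A common middle ground is to nest only types that are private or internal to the owning class, which the rule does not flag.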

    Read the article

  • ROracle support for TimesTen In-Memory Database

    - by Sherry LaMonica
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database.

    Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database providing very fast performance and high throughput through its memory-centric architecture. TimesTen is designed for low latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database.

    ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC.

    The latest ROracle enhancements include:

    - Support for Oracle TimesTen In-Memory Database
    - Support for Date-Time using R's POSIXct/POSIXlt data types
    - RAW, BLOB and BFILE data type support
    - Option to specify number of rows per fetch operation
    - Option to prefetch LOB data
    - Break support using Ctrl-C
    - Statement caching support

    TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries:

    - Analytic functions: AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE
    - Analytic clauses: OVER PARTITION BY and OVER ORDER BY
    - Multidimensional grouping operators:
        - Grouping clauses: GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS
        - Grouping functions: GROUP, GROUPING_ID, GROUP_ID
    - WITH clause, which allows repeated references to a named subquery block
    - Aggregate expressions over DISTINCT expressions
    - General expressions that return a character string in the source or a pattern within the LIKE predicate
    - Ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause)

    Note: Some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details.

    Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver.

        > install.packages("ROracle")
        > library(ROracle)
        Loading required package: DBI
        > drv <- dbDriver("Oracle")

    Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user.

        > conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct")

    You have the option to report the server type - Oracle or TimesTen:

        > print (paste ("Server type =", dbGetInfo (conn)$serverType))
        [1] "Server type = TimesTen IMDB"

    To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session.

        > dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE)
        [1] TRUE

    Verify that the newly created IRIS table is available in the database. To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively.

        > dbListTables (conn)
        [1] "IRIS"
        > dbListFields (conn, "IRIS")
        [1] "SEPAL.LENGTH" "SEPAL.WIDTH"  "PETAL.LENGTH" "PETAL.WIDTH"  "SPECIES"

    To retrieve a summary of the data from the database we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time.

        > iris.summary <- dbGetQuery(conn, 'SELECT SPECIES, AVG ("SEPAL.LENGTH") AS AVG_SLENGTH, AVG ("SEPAL.WIDTH") AS AVG_SWIDTH, AVG ("PETAL.LENGTH") AS AVG_PLENGTH, AVG ("PETAL.WIDTH") AS AVG_PWIDTH FROM IRIS GROUP BY ROLLUP (SPECIES)')
        > iris.summary
             SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH
        1     setosa    5.006000   3.428000       1.462   0.246000
        2 versicolor    5.936000   2.770000       4.260   1.326000
        3  virginica    6.588000   2.974000       5.552   2.026000
        4       <NA>    5.843333   3.057333       3.758   1.199333

    Finally, disconnect from the TimesTen database.

        > dbCommit (conn)
        [1] TRUE
        > dbDisconnect (conn)
        [1] TRUE

    We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.

    Read the article

  • Create associations between information

    - by Andrea Girardi
    I deployed a project some days ago that allows extracting medical articles using the results of a questionnaire completed by a user. For instance, if I reply on the questionnaire that I have diabetes type 2 and I'm a smoker, my algorithm extracts all articles related to diabetes, bubbling up the articles that contain information about diabetes type 2 and smoking. Basically we created a list of topics and, for every topic, we defined a kind of "guideline" that allows extracting and ordering information for a user. I'm quite sure there are better ways to relate two pieces of content, but I was not able to find them online. Could you suggest a model, algorithm or paper to better understand this kind of problem, and that helps me find a faster and more accurate way to extract information for a user?

    Read the article

  • Can I use Wubi install on Windows 7 FDE?

    - by Michael Chapman
    I have a Windows machine using TrueCrypt 7.1a FDE. I would like to use Wubi to install Ubuntu within Windows. Will doing this cause any issues with my system booting up? From what I understand, Wubi does not modify any bootloaders; all it does is modify some boot settings within Windows. So in theory the TrueCrypt custom bootloader will remain the same, and after I get through the TrueCrypt prompt, I'll have the option of Windows or Ubuntu, right?

    Read the article

  • Is there any difference between processor and core?

    - by Salvador
    The following two commands seem to give me different information about the same hardware:

        srs@ubuntu:~$ cat /proc/cpuinfo | grep -e processor -e cores
        processor  : 0
        cpu cores  : 4
        processor  : 1
        cpu cores  : 4
        processor  : 2
        cpu cores  : 4
        processor  : 3
        cpu cores  : 4

        srs@ubuntu:~$ sudo dmidecode -t processor
        # dmidecode 2.9
        SMBIOS 2.6 present.
        Handle 0x0004, DMI type 4, 42 bytes
        Processor Information
            Socket Designation: LGA1155
            Type: Central Processor
            Family: <OUT OF SPEC>
            Manufacturer: Intel
            ID: A7 06 02 00 FF FB EB BF
            Version: Intel(R) Core(TM) i5-2500K CPU @ 3.30GHz
            Voltage: 1.0 V
            External Clock: 100 MHz
            Max Speed: 3800 MHz
            Current Speed: 3300 MHz
            Status: Populated, Enabled
            Upgrade: Other
            L1 Cache Handle: 0x0005
            L2 Cache Handle: 0x0006
            L3 Cache Handle: 0x0007
            Serial Number: To Be Filled By O.E.M.
            Asset Tag: To Be Filled By O.E.M.
            Part Number: To Be Filled By O.E.M.
            Core Count: 4
            Core Enabled: 1
            Characteristics: 64-bit capable

    Until today I thought I had a single processor with 4 independent cores. I also thought that different threads could be used within each core.

    Read the article

  • Partial rendering control using JQuery in MVC 2

    This article shows a web custom control that allows partial rendering using jQuery in an MVC 2 web application.

    Read the article

  • WCF web service with Neural Network

    - by Gary Frank
    I am developing a web service that performs object recognition. It will be available for testing as soon as enough code has been developed, and then officially when it is finished. It is based on a radically new type of artificial neural network that I designed. Its goal is to recognize any type of object within an image. Besides the WCF web service, the project will also create a website to test and demonstrate the web service. Here is a link with more information. http://www.indiegogo.com/VOR

    Read the article

  • Unknown error XNA cannot detect importer for "program.cs"

    - by Evan Kohilas
    I am not too sure what I have done to cause this, but even after undoing all my edits, this error still appears:

        Error 1  Cannot autodetect which importer to use for "Program.cs". There are no importers which handle this file type. Specify the importer that handles this file type in your project.  (filepath)\Advanced Pong\AdvancedPongContent\Program.cs  Advanced Pong

    After receiving this error, everything between #if and #endif in Program.cs fades grey:

        using System;

        namespace Advanced_Pong
        {
        #if WINDOWS || XBOX
            static class Program
            {
                /// <summary>
                /// The main entry point for the application.
                /// </summary>
                static void Main(string[] args)
                {
                    using (Game1 game = new Game1())
                    {
                        game.Run();
                    }
                }
            }
        #endif
        }

    I have searched this and could not find a solution anywhere. Any help is appreciated.

    Read the article

  • What would be the best means for a GUI with a lot of FX in Unity

    - by Lionel Barret
    The game I am working on (we are in R&D) is based almost exclusively on a windowed GUI with a lot of FX (fading, growing, etc.). We will also likely need custom widgets (like a sound recording graph). The game will be made with Unity and, from what I heard, the default GUI system has quite a bad rep; it is too slow for many usages. So I am wondering what would be the best way to do what we need.

    Read the article

  • Using Network load balancing to distribute load for SharePoint2010 – Part3 of building my own development SharePoint2010 Farm

    - by ybbest
    Part1 of building my own development SharePoint2010 Farm
    Part2 of building my own development SharePoint2010 Farm
    Part3 of building my own development SharePoint2010 Farm

    In my last post, I installed SharePoint2010 on one of the servers (WFE One) and configured it using the OOB SharePoint configuration wizard. In this post I will show you how to use OOB Windows network load balancing to distribute load for a SharePoint2010 site.

    1. Install SharePoint on another server, WFE Two (you can follow the steps in my last post), but instead of choosing to create a new farm, you need to select "connect to existing farm" this time.
    2. Click next, then click the retrieve database names button and select the farm configuration database.
    3. Click next and enter the passphrase you specified when you first installed the SharePoint farm.
    4. Click the advanced settings and select "Use this machine to host the web site".
    5. Click OK to finish the configuration.
    6. Next, install NLB on the two WFE (web front end) SharePoint servers.
    7. Configure NLB to create the cluster: go to Start > Administrative Tools > Network Load Balancing Manager.
    8. Right-click the Network Load Balancing Clusters node and select New Cluster.
    9. Type in the host name that is to be part of the new cluster.
    10. Type in the IP address for the cluster.
    11. Select Multicast for this cluster (the default is Unicast).
    12. You can configure the Port Rules for the clustering, but I will leave the defaults here.
    13. Add the other WFE to the cluster.
    14. Type in the host name that is to be part of the new cluster.
    15. Set the Priority to 2.
    16. Click Next to complete the cluster setup.
    17. Create an entry in the DNS for the new cluster.
    18. Add the binding to the IIS site in the IIS Manager.
    19. Change the alternate access mapping for your default site collection from http://sp2010wefone to http://team.
    20. Browse to http://Team; you will be redirected to the SharePoint site.

    Read the article

  • Is MongoDB a good choice or not for my application?

    - by shubham
    I have a reporting application which stores reports in XML format as received from the source (the XML schema is not defined; it can be any format), and those reports contain some keys and values - for example jobid and setid are keys for one type of report, and userid and groupId for another type of report, etc. The type of keys that can be referred from the document is determined by the namespaces used in the XML doc, and these keys are stored on the basis of the namespace used in the XML document. For example, if a tag in an XML fragment uses namespace="myspace1", then I have keys A and B for myspace1 stored in another table. The application fetches those keys from that table for this namespace, looks for their values in the XML doc and stores them in another table along with a pointer to the XML document (the ID of a record storing the complete XML document in a cell).

    Use cases:

    - When the user queries for a key and value, I return the document or the set of documents having those key/value pairs.
    - When the user queries for a certain key and provides the name of a (pre-stored) XSLT, I fetch the set of documents fulfilling that criteria and convert the XML to HTML with the specified XSLT.
    - When the user asks for a particular fragment of a doc, the application can also fetch a subset from a particular document.
    - When the user queries for the top x values of a certain key, I return the set of documents having the top 10 values of that key.

    I am using the DB2 database for its support of XML along with relational capabilities. That makes it easier for me to run XPath expressions and fetch the values of keys, and also to aggregate a set of documents fulfilling a criteria, all on the database side.

    Problems: DB2 stores XML docs of up to 2GB in size, and retrieval is very slow. If something involves many documents, it takes significant time for things to show up in the browser, and the user has to wait. Can MongoDB help in this case, as it is document oriented? Can I do XML-related XPath queries and document transformations on the DB side? Or is it ok to use both in such a case?
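    For what it's worth, here is a rough sketch (my own illustration, not from the question; collection and field names are invented) of how the extracted key/value pairs plus the raw XML could be modelled as a single MongoDB document and queried by key with the official .NET driver. As far as I know MongoDB has no server-side XPath/XSLT support, so transformations would still happen in the application tier.

        // Sketch only: one document per report, carrying the raw XML plus the
        // keys pulled out of it, then a query by an indexed key.
        using MongoDB.Bson;
        using MongoDB.Driver;

        class ReportStore
        {
            static void Main()
            {
                var client = new MongoClient("mongodb://localhost:27017");
                var reports = client.GetDatabase("reporting")
                                    .GetCollection<BsonDocument>("reports");

                // Store the report together with its extracted keys.
                reports.InsertOne(new BsonDocument
                {
                    { "namespace", "myspace1" },
                    { "keys", new BsonDocument { { "jobid", "J-42" }, { "setid", "S-7" } } },
                    { "rawXml", "<report xmlns=\"myspace1\">...</report>" }
                });

                // Query by key/value; an index on "keys.jobid" keeps this fast.
                var filter = Builders<BsonDocument>.Filter.Eq("keys.jobid", "J-42");
                foreach (var doc in reports.Find(filter).ToList())
                    System.Console.WriteLine(doc["rawXml"]);
            }
        }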

    Read the article

  • MicroTraining: Managing SSIS Connections–10 Apr 2012 at 10:00 AM EDT!

    - by andyleonard
    I am pleased to announce another free Linchpin People MicroTraining Event! On Tuesday, 10 Apr 2012 at 10:00 AM EDT, I will present Managing SSIS Connections . In this presentation, I will show you several means for managing SSIS connectivity using built-in functionality and a custom trick or two I picked up over the past few years. Want to learn more? It’s free (and no phone number required)! Register today. :{>...(read more)

    Read the article

  • Google Bot trying to access my web app's sitemap

    - by geekrutherford
    Interesting find today...

    I was perusing the event log on our web server today for any unexpected ASP.NET exceptions/errors. Found the following:

        Exception information:
            Exception type: HttpException
            Exception message: Path '/builder/builder.sitemap' is forbidden.
        Request information:
            Request URL: https://www.bondwave.com:443/builder/builder.sitemap
            Request path: /builder/builder.sitemap
            User host address: 66.249.71.247
            User:
            Is authenticated: False
            Authentication Type:
            Thread account name: NT AUTHORITY\NETWORK SERVICE

    At first I thought this was maybe an attempt by a hacker to mess with the sitemap. Using a handy web site (www.network-tools.com) I did a lookup on the IP address and found it was a Google bot trying to crawl the application. In this case, I would expect an exception or 403 since the site requires authentication anyway.

    Read the article

  • Windows Installer Error Codes 2738 and 2739

    - by Wil Peck
    I recently encountered this error on my Vista x64 box and came across a post that ended up providing the resolution: Link to information about MSI script-based custom action error codes 2738 and 2739. On my system I went to the C:\Windows\SysWOW64 directory and re-registered vbscript.dll and jscript.dll. Once I did this my WiX project built and I no longer received the 4 ICE offenses (ICE08, ICE09, ICE32 and ICE61). Technorati Tags: WIX, Windows Installer

    Read the article

  • SQL Server Central Webinar #17: Monitoring your SQL Server Instances

    Join SQL Server MVP Brad McGehee to learn why it is so important to monitor your SQL Server instances and what you should consider when starting out. This webinar will also show you how you can use Red Gate's SQL Monitor for SQL Server monitoring and will include an overview of the new features released in v3.0, including the ability to create your own custom metric definition and to support different access permissions.

    Read the article

  • ext4 hogs a lot of unknown space compared to ext3

    - by rejith
    The ext4 FS has claimed 3% of the partition space. Where has this gone, and can I get it reclaimed? I have tried disabling journals for the ext4 partition; even this does not help. Are there any other tricks I can try to get the space reclaimed, other than reverting back to ext3?

        $ lsb_release -cr
        Release:        12.04
        Codename:       precise

        df -hP | grep media
        /dev/sda3        21G  430M   20G   3% /media/MAIL
        /dev/sda2       148G  188M  148G   1% /media/DATA   => if I move this to ext4 it claims 2.4G

        /dev/sda3 on /media/MAIL type ext4 (rw,nosuid,nodev,uhelper=udisks)
        /dev/sda2 on /media/DATA type ext3 (rw,nosuid,nodev,uhelper=udisks)

        $ sudo tune2fs -l /dev/sda3 | grep 'Reserved block count'
        Reserved block count: 0
        $ sudo tune2fs -l /dev/sda2 | grep 'Reserved block count'
        Reserved block count: 0

    No hidden files or directories:

        $ sudo du -ah *; pwd
        16K     lost+found
        /media/MAIL

    Read the article

  • Azure – Part 5 – Repository Pattern for Table Service

    - by Shaun
    In my last post I created a very simple WCF service with the user registration functionality. I created an entity for the user data and a DataContext class which provides some methods for operating on the entities, such as add, delete, etc., and in the service method I utilized it to add a new entity into the table service. But I didn't have any validation before registering, which is not acceptable in a real project. So in this post I will first add some validation before performing the data creation code, and show how to use LINQ with the table service.

    LINQ to Table Service

    Since the table service utilizes ADO.NET Data Service to expose the data, and the managed library of ADO.NET Data Service supports LINQ, we can use it to deal with the data of the table service. Let me explain with my current example: I would like to ensure that when registering a new user the email address is unique, so I need to check the account entities in the table service before adding. If you remember, in my last post I mentioned that there's a method in the TableServiceContext class – CreateQuery – which will create an IQueryable instance from a given type of entity. So here I create a method under my AccountDataContext class to return the IQueryable<Account>, which I named Load.

        public class AccountDataContext : TableServiceContext
        {
            private CloudStorageAccount _storageAccount;

            public AccountDataContext(CloudStorageAccount storageAccount)
                : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials)
            {
                _storageAccount = storageAccount;

                var tableStorage = new CloudTableClient(_storageAccount.TableEndpoint.AbsoluteUri,
                                                        _storageAccount.Credentials);
                tableStorage.CreateTableIfNotExist("Account");
            }

            public void Add(Account accountToAdd)
            {
                AddObject("Account", accountToAdd);
                SaveChanges();
            }

            public IQueryable<Account> Load()
            {
                return CreateQuery<Account>("Account");
            }
        }

    The method returns the IQueryable<Account> so that I can perform LINQ operations on it. Back in my service class, I use it to implement my validation.

        public bool Register(string email, string password)
        {
            var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
            var accountToAdd = new Account(email, password) { DateCreated = DateTime.Now };
            var accountContext = new AccountDataContext(storageAccount);

            // validation
            var accountNumber = accountContext.Load()
                .Where(a => a.Email == accountToAdd.Email)
                .Count();
            if (accountNumber > 0)
            {
                throw new ApplicationException(string.Format("Your account {0} had been used.", accountToAdd.Email));
            }

            // create entity
            try
            {
                accountContext.Add(accountToAdd);
                return true;
            }
            catch (Exception ex)
            {
                Trace.TraceInformation(ex.ToString());
            }
            return false;
        }

    I used the Load method to retrieve the IQueryable<Account> and the Where method to find the accounts whose email address is the same as the one being registered. If one exists, I throw an exception back to the client side. Let's run it and test from my simple client application.

    Oops! Looks like we encountered an unexpected exception. It said that "Count" is not supported by the ADO.NET Data Service LINQ managed library. That is because the table storage managed library (aka. TableServiceContext) is based on the ADO.NET Data Service, and it supports only a very limited set of LINQ operations.

    Although I didn't find a full list or documentation of which LINQ methods it supports, I can refer to a page on MSDN here. It gives a rough summary of which query operations the ADO.NET Data Service managed library supports and which it doesn't. As you can see, the Count method is not in the supported list. It's not only the query operations: the inner lambda expressions in the Where method are limited when using the ADO.NET Data Service managed library as well. For example, if you add (a => !a.DateDeleted.HasValue) in the Where method to exclude deleted accounts, it raises an exception saying "Invalid Input". Based on my experience you should always use simple comparisons (such as ==, >, <=, etc.) on simple members (such as string, integer, etc.) and not use any shortcut methods (such as string.Compare, string.IsNullOrEmpty, etc.).

        // validation
        var accountNumber = accountContext.Load()
            .Where(a => a.Email == accountToAdd.Email)
            .ToList()
            .Count;
        if (accountNumber > 0)
        {
            throw new ApplicationException(string.Format("Your account {0} had been used.", accountToAdd.Email));
        }

    We changed it a bit and tried again. Since I had already created an account with my mail address, this time it gave me an exception saying that the email had been used, which is correct.

    Repository Pattern for Table Service

    The AccountDataContext takes the responsibility of saving and loading the account entity, but only for that specific entity. Is it possible to have a dynamic or generic DataContext class which can operate on any kind of entity in my system? Of course yes. Although there's no typical database in the table service, we can treat the entities as records, similar to the data entities if we used OR mapping. As we can use some ORM architecture patterns here, we should be able to adopt one of them - the Repository Pattern - in this example. We know that the base class - TableServiceContext - provides 4 methods for operating on the table entities, which are CreateQuery, AddObject, UpdateObject and DeleteObject, and we can create a relationship between the entity class, the table container name and the entity set name. So it's really simple to have a generic base class for any kind of entity. Let's rename the AccountDataContext to DynamicDataContext and make the Account type a type parameter of it.

        public class DynamicDataContext<T> : TableServiceContext where T : TableServiceEntity
        {
            private CloudStorageAccount _storageAccount;
            private string _entitySetName;

            public DynamicDataContext(CloudStorageAccount storageAccount)
                : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials)
            {
                _storageAccount = storageAccount;
                _entitySetName = typeof(T).Name;

                var tableStorage = new CloudTableClient(_storageAccount.TableEndpoint.AbsoluteUri,
                                                        _storageAccount.Credentials);
                tableStorage.CreateTableIfNotExist(_entitySetName);
            }

            public void Add(T entityToAdd)
            {
                AddObject(_entitySetName, entityToAdd);
                SaveChanges();
            }

            public void Update(T entityToUpdate)
            {
                UpdateObject(entityToUpdate);
                SaveChanges();
            }

            public void Delete(T entityToDelete)
            {
                DeleteObject(entityToDelete);
                SaveChanges();
            }

            public IQueryable<T> Load()
            {
                return CreateQuery<T>(_entitySetName);
            }
        }

    I saved the name of the entity type at construction time for performance reasons. The table name and entity set name will be the same as the name of the entity class.

    The Load method returns a generic IQueryable instance which supports the lazy load feature. Then in my service class I changed the AccountDataContext to DynamicDataContext, and that's all.

        var accountContext = new DynamicDataContext<Account>(storageAccount);

    Run it again and register another account. The DynamicDataContext can now be used for any entities. For example, I would like the account to have a list of notes, where each note contains 3 custom properties: Account Email, Title and Content. We create the note entity class.

        public class Note : TableServiceEntity
        {
            public string AccountEmail { get; set; }
            public string Title { get; set; }
            public string Content { get; set; }
            public DateTime DateCreated { get; set; }
            public DateTime? DateDeleted { get; set; }

            public Note()
                : base()
            {
            }

            public Note(string email)
                : base(email, string.Format("{0}_{1}", email, Guid.NewGuid().ToString()))
            {
                AccountEmail = email;
            }
        }

    With no need to tweak the DynamicDataContext, we can go directly to the service class to implement the logic. Notice here I utilized two DynamicDataContext instances with different type parameters: Note and Account.

        public class NoteService : INoteService
        {
            public void Create(string email, string title, string content)
            {
                var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
                var accountContext = new DynamicDataContext<Account>(storageAccount);
                var noteContext = new DynamicDataContext<Note>(storageAccount);

                // validate - email must exist
                var accounts = accountContext.Load()
                    .Where(a => a.Email == email)
                    .ToList()
                    .Count;
                if (accounts <= 0)
                    throw new ApplicationException(string.Format("The account {0} does not exist in the system, please register and try again.", email));

                // save the note
                var noteToAdd = new Note(email) { Title = title, Content = content, DateCreated = DateTime.Now };
                noteContext.Add(noteToAdd);
            }
        }

    We then updated our client application to test the service. I didn't implement any list service to show all notes, but we can have a look at the local SQL database if we run it in the local development fabric.

    Summary

    In this post I explained a bit about the limited LINQ support for the table service, and then I demonstrated how to use the repository pattern in the table service data access layer and make the DataContext dynamic. The DynamicDataContext I created in this post is just a prototype. In fact, we should create the relevant interface to make it testable, and for a better structure we'd better separate the DataContext classes for each individual kind of entity. So it should have IDataContextBase<T> and DataContextBase<T>, and for each entity we would have

        class AccountDataContext<Account> : IDataContextBase<Account>, DataContextBase<Account> { … }
        class NoteDataContext<Note> : IDataContextBase<Note>, DataContextBase<Note> { … }

    (a rough sketch of this separation follows below).

    Besides structured data saving and loading, another common scenario is saving and loading binary data such as images and files. In my next post I will show how to use the Blob Service to store binary data - making the account able to upload its logo in my example.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
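    Picking up the closing note about IDataContextBase<T> / DataContextBase<T>, here is a rough sketch of that separation (my own illustration, not the author's code; it assumes the TableServiceEntity type comes from the same Microsoft.WindowsAzure.StorageClient namespace the post's other storage types use):

        using System.Linq;
        using Microsoft.WindowsAzure.StorageClient;

        // Sketch: a small abstraction over the generic context so callers (and
        // unit tests) depend on an interface rather than on TableServiceContext.
        public interface IDataContextBase<T> where T : TableServiceEntity
        {
            void Add(T entity);
            void Update(T entity);
            void Delete(T entity);
            IQueryable<T> Load();
        }

        // The post's DynamicDataContext<T> already has matching members, so a
        // shared base could implement the interface and each entity would get a
        // thin, strongly named context on top of it, roughly:
        //
        //   public class DataContextBase<T> : DynamicDataContext<T>, IDataContextBase<T> { ... }
        //   public class AccountDataContext : DataContextBase<Account> { }
        //   public class NoteDataContext : DataContextBase<Note> { }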

    Read the article

  • A Look at SQL Server 2008 Change Tracking

    Before SQL Server 2008, you had to build a custom solution if you wanted to keep track of the changes to the data in your tables. SQL Server 2008 has a new offering called Change Tracking that keeps track of each DML event type and the keys of the row that was affected.
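    As a quick illustration (database, table and column names are invented), once change tracking is enabled on the database and the table, the tracked DML operations and affected keys can be read with an ordinary query against CHANGETABLE; the hedged C#/ADO.NET sketch below shows the shape of that call:

        // Sketch: reading rows changed since a previously saved sync version.
        // Assumes change tracking is already enabled, e.g.:
        //   ALTER DATABASE MyDb SET CHANGE_TRACKING = ON;
        //   ALTER TABLE dbo.Customers ENABLE CHANGE_TRACKING;
        using System;
        using System.Data.SqlClient;

        class ChangeTrackingDemo
        {
            static void Main()
            {
                const long lastSyncVersion = 0; // normally persisted between runs
                using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
                {
                    conn.Open();
                    var cmd = new SqlCommand(
                        @"SELECT CT.SYS_CHANGE_OPERATION, CT.CustomerId
                          FROM CHANGETABLE(CHANGES dbo.Customers, @lastVersion) AS CT",
                        conn);
                    cmd.Parameters.AddWithValue("@lastVersion", lastSyncVersion);

                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // I = insert, U = update, D = delete, plus the key of the affected row.
                            Console.WriteLine($"{reader.GetString(0)}: customer {reader.GetInt32(1)}");
                        }
                    }
                }
            }
        }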

    Read the article

  • Chart Chooser Helps You Pick the Right Chart for the Job

    - by Jason Fitzpatrick
    If you’re not sure what kind of chart would best showcase the data you’re presenting, Chart Chooser makes short work of narrowing it down. Are you trying to showcase trends? Compare the composition of sets? Show distributions and trends together? By selecting what you’re trying to highlight, Chart Chooser automatically narrows the pool of chart types to show which would effectively achieve your end. Once you’ve narrowed it down to the chart type you want, you can even download an Excel template for that chart type and populate it with your own data. Hit up the link below to take it for a spin and grab some free templates. Chart Chooser [via Flowing Data]

    Read the article

  • Getting the terms associated with a specific list item

    - by Gino Abraham
    I had a fancy requirement where I had to get all tags associated with a document set in a document library. The normal tag cloud webpart was not working when I added it to the document set home page, so I planned a custom webpart. I checked the net for a straightforward way to achieve this, but was not lucky enough to find something. Since I didn't get any samples on the net, I looked into Microsoft.SharePoint.Portal.WebControls and found a solution: the SocialDataFrameManager control in 14Hive/Template/layouts/SocialDataFrame.aspx directed me to it. You can get the dll from the ISAPI folder. The following code snippet can get all terms associated with a list item, given that you have the list name and the id of the list item.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.SharePoint;
        using Microsoft.Office.Server.SocialData;

        namespace TagChecker
        {
            class Program
            {
                static void Main(string[] args)
                {
                    // Your site url
                    string siteUrl = "http://contoso";

                    // List name
                    string listName = "DocumentLibrary1";

                    // List item id for which you want to get all terms
                    int listItemId = 35;

                    using (SPSite site = new SPSite(siteUrl))
                    {
                        using (SPWeb web = site.OpenWeb())
                        {
                            SPListItem listItem = web.Lists[listName].GetItemById(listItemId);
                            string url = string.Empty;

                            // Based on the list type the url would be formed. Code sniffed from Microsoft dlls :)
                            if (listItem.ParentList.BaseType == SPBaseType.DocumentLibrary)
                            {
                                url = listItem.Web.Url.TrimEnd(new char[] { '/' }) + "/" + listItem.Url.TrimStart(new char[] { '/' });
                            }
                            else if (SPFileSystemObjectType.Folder == listItem.FileSystemObjectType)
                            {
                                url = listItem.Web.Url.TrimEnd(new char[] { '/' }) + "/" + listItem.Folder.Url.TrimStart(new char[] { '/' });
                            }
                            else
                            {
                                url = listItem.Web.Url.TrimEnd(new char[] { '/' }) + "/" + listItem.ParentList.Forms[PAGETYPE.PAGE_DISPLAYFORM].Url.TrimStart(new char[] { '/' }) + "?ID=" + listItem.ID.ToString();
                            }

                            SPServiceContext serviceContext = SPServiceContext.GetContext(site);
                            Uri uri = new Uri(url);
                            SocialTagManager mgr = new SocialTagManager(serviceContext);
                            SocialTerm[] terms = mgr.GetTerms(uri);

                            foreach (SocialTerm term in terms)
                            {
                                Console.WriteLine(term.Term.Labels[0].Value);
                            }
                        }
                    }
                    Console.Read();
                }
            }
        }

    The referenced dlls are Microsoft.SharePoint, Microsoft.SharePoint.Taxonomy, Microsoft.Office.Server and Microsoft.Office.Server.UserProfiles, all from the ISAPI folder. This logic can be used to make a custom tag cloud webpart by taking code from the OOB tag cloud, so that you can have your webpart anywhere in the site and still get the tags added to a specific library/list. Hope this helps someone.

    Read the article

  • What is Rainbow (not the CMS)

    - by Jeremy Thompson
    I was reading this excellent blog article regarding speeding up the badge page, and in the last comment the author @waffles (a.k.a. Sam Saffron) mentions these tools: "dapper and a bunch of custom helpers like rainbow, sql builder etc". Dapper and the SQL builder were easy to look up, but Rainbow keeps pointing me to a CMS; can someone please point me to the real source? Thanks. Obviously the architecture of these [SE] sites is uber cool and ultra fast, so no comments on that, thanks.

    Read the article

  • Space invaders clone not moving properly

    - by ThePlan
    I'm trying to make a basic Space Invaders clone in Allegro 5. I've got my game set up, basic events and such. Here is the code:

        #include <allegro5/allegro.h>
        #include <allegro5/allegro_image.h>
        #include <allegro5/allegro_primitives.h>
        #include <allegro5/allegro_font.h>
        #include <allegro5/allegro_ttf.h>
        #include "Entity.h"

        // GLOBALS ==========================================
        const int width = 500;
        const int height = 500;
        const int imgsize = 3;
        bool key[5] = {false, false, false, false, false};
        bool running = true;
        bool draw = true;

        // FUNCTIONS ========================================
        void initSpaceship(Spaceship &ship);
        void moveSpaceshipRight(Spaceship &ship);
        void moveSpaceshipLeft(Spaceship &ship);
        void initInvader(Invader &invader);
        void moveInvaderRight(Invader &invader);
        void moveInvaderLeft(Invader &invader);
        void initBullet(Bullet &bullet);
        void fireBullet();
        void doCollision();
        void updateInvaders();
        void drawText();

        enum key_t { UP, DOWN, LEFT, RIGHT, SPACE };
        enum source_t { INVADER, DEFENDER };

        int main(void)
        {
            if(!al_init())
            {
                return -1;
            }

            Spaceship ship;
            Invader invader;
            Bullet bullet;

            al_init_image_addon();
            al_install_keyboard();
            al_init_font_addon();
            al_init_ttf_addon();

            ALLEGRO_DISPLAY *display = al_create_display(width, height);
            ALLEGRO_EVENT_QUEUE *event_queue = al_create_event_queue();
            ALLEGRO_TIMER *timer = al_create_timer(1.0 / 60);
            ALLEGRO_BITMAP *images[imgsize];
            ALLEGRO_FONT *font1 = al_load_font("arial.ttf", 20, 0);

            al_register_event_source(event_queue, al_get_keyboard_event_source());
            al_register_event_source(event_queue, al_get_display_event_source(display));
            al_register_event_source(event_queue, al_get_timer_event_source(timer));

            images[0] = al_load_bitmap("defender.bmp");
            images[1] = al_load_bitmap("invader.bmp");
            images[2] = al_load_bitmap("explosion.bmp");
            al_convert_mask_to_alpha(images[0], al_map_rgb(0, 0, 0));
            al_convert_mask_to_alpha(images[1], al_map_rgb(0, 0, 0));
            al_convert_mask_to_alpha(images[2], al_map_rgb(0, 0, 0));

            initSpaceship(ship);
            initBullet(bullet);
            initInvader(invader);

            al_start_timer(timer);

            while(running)
            {
                ALLEGRO_EVENT ev;
                al_wait_for_event(event_queue, &ev);

                if(ev.type == ALLEGRO_EVENT_TIMER)
                {
                    draw = true;
                    if(key[RIGHT] == true)
                        moveSpaceshipRight(ship);
                    if(key[LEFT] == true)
                        moveSpaceshipLeft(ship);
                }
                else if(ev.type == ALLEGRO_EVENT_DISPLAY_CLOSE)
                    running = false;
                else if(ev.type == ALLEGRO_EVENT_KEY_DOWN)
                {
                    switch(ev.keyboard.keycode)
                    {
                    case ALLEGRO_KEY_ESCAPE:
                        running = false;
                        break;
                    case ALLEGRO_KEY_LEFT:
                        key[LEFT] = true;
                        break;
                    case ALLEGRO_KEY_RIGHT:
                        key[RIGHT] = true;
                        break;
                    case ALLEGRO_KEY_SPACE:
                        key[SPACE] = true;
                        break;
                    }
                }
                else if(ev.type == ALLEGRO_KEY_UP)
                {
                    switch(ev.keyboard.keycode)
                    {
                    case ALLEGRO_KEY_LEFT:
                        key[LEFT] = false;
                        break;
                    case ALLEGRO_KEY_RIGHT:
                        key[RIGHT] = false;
                        break;
                    case ALLEGRO_KEY_SPACE:
                        key[SPACE] = false;
                        break;
                    }
                }

                if(draw && al_is_event_queue_empty(event_queue))
                {
                    draw = false;
                    al_draw_bitmap(images[0], ship.pos_x, ship.pos_y, 0);
                    al_flip_display();
                    al_clear_to_color(al_map_rgb(0, 0, 0));
                }
            }

            al_destroy_font(font1);
            al_destroy_event_queue(event_queue);
            al_destroy_timer(timer);
            for(int i = 0; i < imgsize; i++)
                al_destroy_bitmap(images[i]);
            al_destroy_display(display);
        }

        // FUNCTION LOGIC ======================================
        void initSpaceship(Spaceship &ship)
        {
            ship.lives = 3;
            ship.speed = 2;
            ship.pos_x = width / 2;
            ship.pos_y = height - 20;
        }

        void initInvader(Invader &invader)
        {
            invader.health = 100;
            invader.count = 40;
            invader.speed = 0.5;
            invader.pos_x = 300;
            invader.pos_y = 300;
        }

        void initBullet(Bullet &bullet)
        {
            bullet.speed = 10;
        }

        void moveSpaceshipRight(Spaceship &ship)
        {
            ship.pos_x += ship.speed;
            if(ship.pos_x >= width)
                ship.pos_x = width-30;
        }

        void moveSpaceshipLeft(Spaceship &ship)
        {
            ship.pos_x -= ship.speed;
            if(ship.pos_x <= 0)
                ship.pos_x = 0+30;
        }

    However it's not behaving the way I want it to; in fact, the behaviour of the ship movement is abnormal. Basically I specified that the ship only moves when the right/left key is down, yet the ship moves constantly in the direction of the key pressed and never stops, although it should only move while my key is down. Even weirder, when I press the opposite key the ship completely stops no matter what else I press. What's wrong with the code? Why does the ship move constantly even after I specified it only moves when a key is down?

    Read the article
