Search Results

Search found 27001 results on 1081 pages for 'michael palmeter (engineered systems product management)'.


  • A developer's WBS – 3 factors of 5

    - by johndoucette
    As a development manager, I have requested work breakdown structures (WBS) many times from the dev leads. Everyone has their own approach, and it is often frustrating that it sometimes takes days to get this simple list. Here is a simple way to get that elusive WBS done in 30 minutes and end up with 125 items in your list – well, 126.

    The WBS is made up of parent-child entities representing the overall outcome of the project. At the bottom of the hierarchical list should be the task item that a developer would perform in support of the branch in the list or WBS. Because I work with different dev leads on every project, I always ask: "what time value would you like to see at the lowest task in order to assign it to a developer and ensure it gets done within the timeframe?" I am partial to a task being 8 hours. Some like 8 to 24 hours. Stay away from tasks defaulting to 1 week. The task becomes way too vague and hard to manage for completeness, especially on short budgets.

    As a developer, your focus is identifying the tasks you need to accomplish in order to deliver the product. As a project manager, you will take the developer's WBS and add all the "other stuff" like quality testing, meetings, documentation, transition to maintenance, etc.

    Start your exercise with the name of the product you are delivering as a result of the project. You should be able to represent what you are building and deploying with one to three words. Example:

        XYZ Public Website
        Middleware
        BizTalk Application

    The reason you start with that single identifier is to always see the list as the product. It helps during each of the next three passes. Now, choose 5 tasks which in their entirety represent the product you will be delivering and add them to the list under the product name you created earlier:

        Public Website
            Security
            Sites
            Infrastructure
            Publishing
            Creative

    Continue this concept of seeing the list as the complete picture and decompose it one more level. You should have 25 items.

        Public Website
            Security
                Authentication
                Login Control
                Administration
                DRM
                Workflow
            Sites
                Masterpages
                Page Layouts
                Web Parts (RIA, Multimedia)
                Content Types
                Structures
            Infrastructure
                ...
            Publishing
                ...
            Creative
                ...

    And one more time for a total of 125 items. The top item makes the list 126.

        Public Website
            Security
                Authentication
                    Install (AD/ADAM/LDAP/SQL)
                    Configuration
                    Management
                    Web App Configuration
                    Implement Provider
                Login Control
                    Login Form
                    Login/Logoff
                    pw change
                    pw recover/forgot
                    email verification
                Administration
                    ...
                DRM
                    ...
                Workflow
                    ...
            Sites
                Masterpages
                Page Layouts
                Web Parts (RIA, Multimedia)
                Content Types
                Structures
            Infrastructure
                ...
            Publishing
                ...
            Creative
                ...

    The next step is to make sure the task at the bottom of every branch represents the "time value" you planned for the project. You can add more to the WBS, and of course if you can't find 5 items, 4 is fine. If a task can be done in a fraction of the time value you determined for the project, try to roll it up into a larger task. In the task actions (later when the iteration is being planned), decompose the details back into the simple tasks. Now, go estimate!

    Read the article

  • Java Mission Control for SE Embedded 8

    - by kshimizu-Oracle
    Java Mission Control, the Java monitoring and management tool, can connect to a JVM running Java SE 8 Embedded and monitor CPU usage, memory, threads and more through its UI. (Java Mission Control itself runs on a desktop JDK, not on the embedded target.)

    1. Requirements for connecting Java Mission Control

        JMX connection (MBean browser): the target must run Java SE Embedded 8 Compact 3 or the Full JRE (the Minimal VM is not supported).
        Flight Recorder: the target must run the Java SE Embedded 8 Full JRE (the Minimal VM is not supported).

    2. Options for the target JVM

        2.1. Enabling the JMX connection (MBeans):

            >java -Dcom.sun.management.jmxremote=true
                  -Dcom.sun.management.jmxremote.port=7091                # port number
                  -Dcom.sun.management.jmxremote.authenticate=false       # authentication
                  -Dcom.sun.management.jmxremote.ssl=false                # SSL
                  -jar application.jar

        If the connection cannot be established, also tell the JVM which address to report:

            "-Djava.rmi.server.hostname=192.168.0.20"                     # IP address / hostname of the target

        See question 5 of the monitoring and management FAQ (http://docs.oracle.com/javase/7/docs/technotes/guides/management/faq.html) for details.

        2.2. Enabling Flight Recorder: add the following JVM options:

            "-XX:+UnlockCommercialFeatures -XX:+FlightRecorder"

    3. Starting Java Mission Control: run the jmc command shipped with the JDK:

        >"JDK_HOME"/bin/jmc

    4. Connecting Java Mission Control to the target JVM: create a new connection, enter the IP address and port of the target JVM, supply credentials if required, and click OK.

    Reference URLs:
    http://www.oracle.com/technetwork/jp/java/javaseproducts/mission-control/index.html
    http://www.oracle.com/technetwork/jp/java/javaseproducts-old/mission-control/java-mission-control-wp-2008279-ja.pdf
    http://www.oracle.com/technetwork/java/embedded/resources/tech/java-flight-rec-on-java-se-emb-8-2158734.html

    Read the article

  • Should I manage authentication on my own if the alternative is very low in usability and I am already managing roles?

    - by rumtscho
    As a small in-house dev department, we only have experience with developing applications for our intranet. We use the existing Active Directory for user account management. It contains the accounts of all company employees and many (but not all) of the business partners we have a cooperation with.

    Now, the top management wants a technology exchange application, and I am the lead dev on the new project. Basically, it is a database containing our know-how, with a web frontend. Our employees, our cooperating business partners, and people who wish to become our cooperating business partners should have access to it and see what technologies we have, so they can trade for them with the department which owns them. The technologies are not patented, but very valuable to competitors, so the department bosses are paranoid about somebody unauthorized gaining access to their technology descriptions. This constraint necessitates a nightmarishly complicated multi-dimensional RBAC-hybrid model. As the Active Directory doesn't even contain all the information needed to infer the roles I use, I will have to manage roles plus per-technology per-user granted access exceptions within my system.

    The current plan is to use Active Directory for authentication. This will result in a multi-hour registration process for our business partners, where the database owner has to manually create logins in our Active Directory and send them credentials. If I manage the logins in my own system, we could improve the usability a lot, for example by letting people have an active (but unprivileged) account as soon as they register. It seems to me that, since I will have a users table in the DB anyway (and will be managing ugly details like storing historical user IDs so that recycled user IDs within the Active Directory don't unexpectedly get rights to view someone's technologies), the additional complexity from implementing authentication functionality will be minimal. Therefore, I am starting to lean towards doing my own user login management and forgetting the AD altogether.

    On the other hand, I see some reasons to stay with Active Directory. First, the conventional wisdom I have heard from experienced programmers is to not do your own user management if you can avoid it. Second, we have code I can reuse for connecting to the Active Directory, while I would have to code the authentication if done in-system (and my boss has clearly stated that getting the project delivered on time has much higher priority than delivering a system with high usability). Third, I am not a very experienced developer (this is my first lead position) and have never done user management before, so I am afraid that I am overlooking some important reasons to use the AD, or that I am underestimating the amount of work left to do my own authentication.

    I would like to know if there are more reasons to go with the AD authentication mechanism. Specifically, if I want to do my own authentication, what would I have to implement besides a secure connection for the login screen (which I would need anyway even if I am only transporting the pw to the AD), lookup of a password hash, and a mechanism for password recovery (which will probably include manual identity verification, so no need for complex mTAN-like solutions)? And, if you have experience with such security-critical systems, which one would you use and why?
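
    If you do end up storing credentials yourself, the "lookup of a password hash" part usually amounts to salted, slow hashing rather than a plain hash lookup. A minimal sketch of what that could look like in .NET, using the built-in PBKDF2 implementation (the class and method names are real, but the iteration count, salt size and storage format here are illustrative assumptions, not a vetted recommendation):

        using System;
        using System.Linq;
        using System.Security.Cryptography;

        public static class PasswordHasher
        {
            // Illustrative parameters; tune the iteration count to your hardware.
            private const int SaltSize = 16;
            private const int HashSize = 32;
            private const int Iterations = 10000;

            public static string Hash(string password)
            {
                var salt = new byte[SaltSize];
                using (var rng = new RNGCryptoServiceProvider())
                    rng.GetBytes(salt);

                using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, Iterations))
                {
                    var hash = pbkdf2.GetBytes(HashSize);
                    // Store salt and hash together, e.g. "salt:hash" in base64.
                    return Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(hash);
                }
            }

            public static bool Verify(string password, string stored)
            {
                var parts = stored.Split(':');
                var salt = Convert.FromBase64String(parts[0]);
                var expected = Convert.FromBase64String(parts[1]);

                using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, Iterations))
                {
                    var actual = pbkdf2.GetBytes(HashSize);
                    return actual.SequenceEqual(expected);
                }
            }
        }

    Everything else the question lists (registration flow, recovery with manual identity checks, lockout policies) sits on top of something like this, which is why the usual advice is to reuse an existing implementation where possible.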

    Read the article

  • Why Are We Here?

    - by Jonathan Mills
    Back in the early 2000s, Toyota had a vision of building the number one best selling minivan in North America. Their current minivan, the Sienna, was small, underpowered, and badly needed help. Yuji Yokoya was given the job of re-engineering the Sienna. There was just one problem: Yuji lived in Japan. He did not know the people or places that he would be engineering for. Believe it or not, Japan is nothing like North America. So, what does a chief engineer do in a situation like that? He packed up his team and flew halfway around the world. He made a commitment to drive through every state in the US, every province in Canada, and Mexico. He met the people and drove the roads that the Sienna would be driving. And guess what, what he learned on that trip revolutionized the Sienna. The innovations he made sent the Sienna to number one. Why? Because he knew who he was building his product for. He knew why he was there.

    Let me ask you this: do you know why you are building what you are building? As a member of a product team, can you tell me how your product will be used in the real world? As you are writing code, building test plans, writing stories, or doing any of the other project tasks, can you picture the face of a person who will be using what you are building? All too often, the answer to those questions is no. Why is it important? Because, every day, project team members make assumptions. Over a given project, it is safe to say project team members will make thousands of assumptions about what they are doing. And all too often, those assumptions are not quite right. It's not that they are not good at their jobs, it's just that they don't really know why they are there.

    So, what to do? First and foremost, stop doing what you are doing. Yes, really. Schedule some time to go visit the people who will be using your product. Don't invite them to you, go to them. Watch them work. Interact with them. Ask them questions. Maybe even try it out yourself. This serves two purposes. One, it shows them that you care about them. They will be far more engaged in your project if they feel like you care. And nothing says you care more than spending some time. Second, it gives you the proper frame of reference for your work. It gives you something tangible to go back to as you are building your product. As you make the thousands of assumptions that you will make over the life of your project, it gives you something to see in your mind that makes it real to you.

    Ultimately, setting a proper frame of reference is critical to the overall success of a project. The funny thing is, it really does not even take that long. In most cases, a 2-3 hour session will give you most of what you need to get the right insight. For the project, it will be the best 2 hours you could spend.

    Read the article

  • MySQL Server 5.6 defaults changes

    - by user12626240
    We're improving the MySQL Server defaults, as announced by Tomas Ulin at MySQL Connect. Here's what we're changing (old default -> new default):

        back_log: 50 -> 50 + (max_connections / 5), capped at 900
        binlog_checksum: off -> CRC32. New variable in 5.6
        binlog_row_event_max_size: 1k -> 8k
        flush_time: 1800 -> 0 on Windows. Was already 0 on other platforms
        host_cache_size: 128 -> 128 + 1 for each of the first 500 max_connections + 1 for every 20 max_connections over 500, capped at 2000. New variable in 5.6
        innodb_autoextend_increment: 8 -> 64. Now affects *.ibd files. 64 is 64 megabytes
        innodb_buffer_pool_instances: 0 -> 8. On 32-bit Windows only, if innodb_buffer_pool_size is greater than 1300M, the default is innodb_buffer_pool_size / 128M
        innodb_concurrency_tickets: 500 -> 5000
        innodb_file_per_table: off -> on
        innodb_log_file_size: 5M -> 48M. InnoDB will always change size to match the my.cnf value. Also see innodb_log_compressed_pages and binlog_row_image
        innodb_old_blocks_time: 0 -> 1000 (1 second)
        innodb_open_files: 300 -> 300; if innodb_file_per_table is ON, the higher of table_open_cache or 300
        innodb_purge_batch_size: 20 -> 300
        innodb_purge_threads: 0 -> 1
        innodb_stats_on_metadata: on -> off
        join_buffer_size: 128k -> 256k
        max_allowed_packet: 1M -> 4M
        max_connect_errors: 10 -> 100
        open_files_limit: 0 -> 5000. See note 1
        query_cache_size: 0 -> 1M
        query_cache_type: on/1 -> off/0
        sort_buffer_size: 2M -> 256k
        sql_mode: none -> NO_ENGINE_SUBSTITUTION. See a later post about the default my.cnf for STRICT_TRANS_TABLES
        sync_master_info: 0 -> 10000. Recommend: master_info_repository=table
        sync_relay_log: 0 -> 10000
        sync_relay_log_info: 0 -> 10000. Recommend: relay_log_info_repository=table. Also see Replication Relay and Status Logs
        table_definition_cache: 400 -> 400 + table_open_cache / 2, capped at 2000
        table_open_cache: 400 -> 2000. Also see table_open_cache_instances
        thread_cache_size: 0 -> 8 + max_connections / 100, capped at 100

    Note 1: In 5.5 there was already a rule to make open_files_limit 10 + max_connections + table_cache_size * 2 if that was higher than the user-specified value. It now uses the higher of that and (5000 or what you specify).

    We are also adding a new default my.cnf file and guided instructions on the key settings to adjust. More on this in a later post. We're also providing a page with suggestions for settings to improve backwards compatibility. The old example files like my-huge.cnf are obsolete. Some of the improvements are present from 5.6.6 and the rest are coming. These are ideas, and until they are in an official GA release, they are subject to change.

    As part of this work I reviewed every old server setting, plus many hundreds of emails of feedback and testing results from inside and outside Oracle's MySQL Support team, and the many excellent blog entries and comments from others over the years, including from many MySQL gurus out there, like Baron, Sheeri, Ronald, Schlomi, Giuseppe and Mark Callaghan.

    With these changes we're trying to make it easier to set up the server by adjusting only a few settings that will cause others to be set. This happens only at server startup and only applies to variables where you haven't set a value. You'll see a similar approach used for the Performance Schema. The gurus don't need this, but for many newcomers the defaults will be very useful.

    Possibly the most unusual change is the way we vary the setting for innodb_buffer_pool_instances for 32-bit Windows. This is because we've found that DLLs with specified load addresses often fragment the limited four gigabyte 32-bit address space and make it impossible to allocate more than about 1300 megabytes of contiguous address space for the InnoDB buffer pool. The smaller requests for many pools are more likely to succeed.

    If you change the value of innodb_log_file_size in my.cnf you will see a message like this in the error log file at the next restart, instead of the old error message:

        [Warning] InnoDB: Resizing redo log from 2*64 to 5*128 pages, LSN=5735153

    One of the biggest challenges for the defaults is the millions of installations on a huge range of systems, from point of sale terminals and routers through shared hosting or end user systems and on to major servers with lots of CPU cores, hundreds of gigabytes of RAM and terabytes of fast disk space. Our past defaults were for the smaller systems, and these changes shift that towards larger shared hosting or shared end user systems, still with a bias towards the smaller end. There is a bias in favour of OLTP workloads, so reporting systems may need more changes. Where there is a conflict between the best settings for benchmarks and normal use, we've favoured production, not benchmarks. We're very interested in your feedback, comments and suggestions.
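
    Since the formula-based defaults only apply to variables you have not set yourself, a my.cnf that pins the values you care about behaves the same before and after the change. A minimal illustrative example (the specific values here are placeholders, not recommendations):

        [mysqld]
        # Explicitly set values are never overridden by the new defaults
        max_connections         = 300
        innodb_buffer_pool_size = 2G
        innodb_log_file_size    = 256M
        sql_mode                = NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

        # Anything left unset, such as back_log or table_definition_cache,
        # picks up the new 5.6 defaults derived from max_connections and
        # table_open_cache at server startup.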

    Read the article

  • return the result of a query and the total number of rows in a single function

    - by csotelo
    This is a question about whether I am working in the best way, whether there are other alternatives, or whether this is the only way. Using CodeIgniter...

    I have the typical two functions: list records and show the total number of records (used for pagination). The problem is that they are rather large. Sample of the two functions in my model:

    Count rows:

        function get_all_count()
        {
            $this->db->select('u.id_user');
            $this->db->from('user u');
            if ($this->session->userdata('detail') != '1')
            {
                $this->db->join('management m', 'm.id_user = u.id_user', 'inner');
                $this->db->where('id_detail', $this->session->userdata('detail'));
                if ($this->session->userdata('management') === '1')
                {
                    $this->db->or_where('detail', 1);
                }
                else
                {
                    $this->db->where("id_profile IN (
                        SELECT e2.id_profile
                        FROM profile e, profile e2, profile_path p, profile_path p2
                        WHERE e.id_profile = " . $this->session->userdata('profile') . "
                        AND p2.id_profile = e.id_profile
                        AND p.path LIKE(CONCAT(p2.path,'%'))
                        AND e2.id_profile = p.id_profile )", NULL, FALSE);
                    $this->db->where('MD5(u.id_user) <>', $this->session->userdata('id_user'));
                }
            }
            $this->db->where('u.id_user <>', 1);
            $this->db->where('flag <>', 3);
            $query = $this->db->get();
            return $query->num_rows();
        }

    Results per page:

        function get_all($limit, $offset, $sort = '')
        {
            $this->db->select('u.id_user, user, email, flag');
            $this->db->from('user u');
            if ($this->session->userdata('detail') != '1')
            {
                $this->db->join('management m', 'm.id_user = u.id_user', 'inner');
                $this->db->where('id_detail', $this->session->userdata('detail'));
                if ($this->session->userdata('management') === '1')
                {
                    $this->db->or_where('detail', 1);
                }
                else
                {
                    $this->db->where("id_profile IN (
                        SELECT e2.id_profile
                        FROM profile e, profile e2, profile_path p, profile_path p2
                        WHERE e.id_profile = " . $this->session->userdata('profile') . "
                        AND p2.id_profile = e.id_profile
                        AND p.path LIKE(CONCAT(p2.path,'%'))
                        AND e2.id_profile = p.id_profile )", NULL, FALSE);
                    $this->db->where('MD5(u.id_user) <>', $this->session->userdata('id_user'));
                }
            }
            $this->db->where('u.id_user <>', 1);
            $this->db->where('flag <>', 3);
            if ($sort)
                $this->db->order_by($sort);
            $this->db->limit($limit, $offset);
            $query = $this->db->get();
            return $query->result();
        }

    As you can see, I repeat most of the function; the only differences are the selected fields and the paging. I wonder if there is any alternative that returns the results and the count of the query in a single function. I have seen many tutorials, and all of them create two functions: one to count and another to show the results... Is there a more optimal way?

    Read the article

  • Django 1.2 + South 0.7 + django-annoying's AutoOneToOneField leads to TypeError: 'LegacyConnection'

    - by konrad
    I'm using Django 1.2 trunk with South 0.7 and an AutoOneToOneField copied from django-annoying. South complained that the field does not have rules defined and the new version of South no longer has an automatic field type parser. So I read the South documentation and wrote the following definition (basically an exact copy of the OneToOneField rules):

        rules = [
            (
                (AutoOneToOneField),
                [],
                {
                    "to": ["rel.to", {}],
                    "to_field": ["rel.field_name", {"default_attr": "rel.to._meta.pk.name"}],
                    "related_name": ["rel.related_name", {"default": None}],
                    "db_index": ["db_index", {"default": True}],
                },
            )
        ]

        from south.modelsinspector import add_introspection_rules
        add_introspection_rules(rules, ["^myapp"])

    Now South raises the following error when I do a schemamigration:

        Traceback (most recent call last):
          File "manage.py", line 11, in <module>
            execute_manager(settings)
          File "django/core/management/__init__.py", line 438, in execute_manager
            utility.execute()
          File "django/core/management/__init__.py", line 379, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "django/core/management/base.py", line 196, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "django/core/management/base.py", line 223, in execute
            output = self.handle(*args, **options)
          File "South-0.7-py2.6.egg/south/management/commands/schemamigration.py", line 92, in handle
            (k, v) for k, v in freezer.freeze_apps([migrations.app_label()]).items()
          File "South-0.7-py2.6.egg/south/creator/freezer.py", line 33, in freeze_apps
            model_defs[model_key(model)] = prep_for_freeze(model)
          File "South-0.7-py2.6.egg/south/creator/freezer.py", line 65, in prep_for_freeze
            fields = modelsinspector.get_model_fields(model, m2m=True)
          File "South-0.7-py2.6.egg/south/modelsinspector.py", line 322, in get_model_fields
            args, kwargs = introspector(field)
          File "South-0.7-py2.6.egg/south/modelsinspector.py", line 271, in introspector
            arg_defs, kwarg_defs = matching_details(field)
          File "South-0.7-py2.6.egg/south/modelsinspector.py", line 187, in matching_details
            if any([isinstance(field, x) for x in classes]):
        TypeError: 'LegacyConnection' object is not iterable

    Is this related to a recent change in Django 1.2 trunk? How do I fix this? I use this field as follows:

        class Bar(models.Model):
            foo = AutoOneToOneField("foo.Foo", primary_key=True, related_name="bar")

    For reference, the field code from django-tagging:

        class AutoSingleRelatedObjectDescriptor(SingleRelatedObjectDescriptor):
            def __get__(self, instance, instance_type=None):
                try:
                    return super(AutoSingleRelatedObjectDescriptor, self).__get__(instance, instance_type)
                except self.related.model.DoesNotExist:
                    obj = self.related.model(**{self.related.field.name: instance})
                    obj.save()
                    return obj

        class AutoOneToOneField(OneToOneField):
            def contribute_to_related_class(self, cls, related):
                setattr(cls, related.get_accessor_name(), AutoSingleRelatedObjectDescriptor(related))

    Read the article

  • Hidden Features of C#?

    - by Serhat Özgel
    This came to my mind after I learned the following from this question: where T : struct

    We, C# developers, all know the basics of C#. I mean declarations, conditionals, loops, operators, etc. Some of us even mastered the stuff like Generics, anonymous types, lambdas, LINQ, ... But what are the most hidden features or tricks of C# that even C# fans, addicts, experts barely know?

    Here are the revealed features so far:

    Keywords: yield by Michael Stum; var by Michael Stum; using() statement by kokos; readonly by kokos; as by Mike Stone; as / is by Ed Swangren; as / is (improved) by Rocketpants; default by deathofrats; global:: by pzycoman; using() blocks by AlexCuse; volatile by Jakub Šturc; extern alias by Jakub Šturc

    Attributes: DefaultValueAttribute by Michael Stum; ObsoleteAttribute by DannySmurf; DebuggerDisplayAttribute by Stu; DebuggerBrowsable and DebuggerStepThrough by bdukes; ThreadStaticAttribute by marxidad; FlagsAttribute by Martin Clarke; ConditionalAttribute by AndrewBurns

    Syntax: ?? operator by kokos; number flaggings by Nick Berardi; where T : new by Lars Mæhlum; implicit generics by Keith; one-parameter lambdas by Keith; auto properties by Keith; namespace aliases by Keith; verbatim string literals with @ by Patrick; enum values by lfoust; @variablenames by marxidad; event operators by marxidad; format string brackets by Portman; property accessor accessibility modifiers by xanadont; ternary operator (?:) by JasonS; checked and unchecked operators by Binoj Antony; implicit and explicit operators by Flory

    Language Features: Nullable types by Brad Barker; Currying by Brian Leahy; anonymous types by Keith; __makeref __reftype __refvalue by Judah Himango; object initializers by lomaxx; format strings by David in Dakota; Extension Methods by marxidad; partial methods by Jon Erickson; preprocessor directives by John Asbeck; DEBUG pre-processor directive by Robert Durgin; operator overloading by SefBkn; type inference by chakrit; boolean operators taken to the next level by Rob Gough; pass value-type variable as interface without boxing by Roman Boiko; programmatically determine declared variable type by Roman Boiko; Static Constructors by Chris; Easier-on-the-eyes / condensed ORM-mapping using LINQ by roosteronacid

    Visual Studio Features: select block of text in editor by Himadri; snippets by DannySmurf

    Framework: TransactionScope by KiwiBastard; DependentTransaction by KiwiBastard; Nullable<T> by IainMH; Mutex by Diago; System.IO.Path by ageektrapped; WeakReference by Juan Manuel

    Methods and Properties: String.IsNullOrEmpty() method by KiwiBastard; List.ForEach() method by KiwiBastard; BeginInvoke(), EndInvoke() methods by Will Dean; Nullable<T>.HasValue and Nullable<T>.Value properties by Rismo; GetValueOrDefault method by John Sheehan

    Tips & Tricks: nice method for event handlers by Andreas H.R. Nilsson; uppercase comparisons by John; access anonymous types without reflection by dp; a quick way to lazily instantiate collection properties by Will; JavaScript-like anonymous inline-functions by roosteronacid

    Other: netmodules by kokos; LINQBridge by Duncan Smart; Parallel Extensions by Joel Coehoorn
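
    For readers skimming the list, a tiny self-contained sketch showing two of the entries above (the yield keyword and the ?? null-coalescing operator) in one place; the method and variable names are made up for illustration:

        using System;
        using System.Collections.Generic;

        class HiddenFeaturesDemo
        {
            // yield: build an iterator without constructing a list up front.
            static IEnumerable<int> Squares(int count)
            {
                for (int i = 1; i <= count; i++)
                    yield return i * i;
            }

            static void Main()
            {
                string configuredName = null;
                // ??: fall back to a default when the left-hand side is null.
                string name = configuredName ?? "anonymous";

                foreach (int square in Squares(3))
                    Console.WriteLine("{0}: {1}", name, square);
            }
        }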

    Read the article

  • Convert IEnumerable to EntitySet

    - by Gregorius
    Hey all, hoping somebody can shed some light, and perhaps a possible solution, on this issue I'm having...

    I have used LINQ to SQL to pull some data from a database into local entities. They are products from a shopping cart system. A product can contain a collection of KitGroups (which are stored in an EntitySet (System.Data.Linq.EntitySet)). KitGroups contain collections of KitItems, and KitItems can contain nested Products (which link back up to the original Product type - so it's recursive).

    From these entities I'm building XML using LINQ to XML - all good here - my XML looks beautiful, calling a "GenerateProductElement" function, which calls itself recursively to generate the nested products. Wonderful stuff.

    However, here's where I'm stuck: I'm now trying to deserialize that XML back to the original objects (all autogenerated by LINQ to SQL)... and herein lies the problem. LINQ to SQL expects my collections to be EntitySet collections, however LINQ to XML (which I'm trying to use to deserialize) is returning IEnumerable. I've experimented with a few ways of casting between the 2, but nothing seems to work... I'm starting to think that I should just deserialize manually (with some funky loops and conditionals to determine which KitGroup KitItems belong to, etc)... however it's really quite tricky and that code is likely to be quite ugly, so I'd love to find a more elegant solution to this problem. Any suggestions?

    Here's a code snippet:

        private Product GenerateProductFromXML(XDocument inDoc)
        {
            var prod = from p in inDoc.Descendants("Product")
                       select new Product
                       {
                           ProductID = (int)p.Attribute("ID"),
                           ProductGUID = (Guid)p.Attribute("GUID"),
                           Name = (string)p.Element("Name"),
                           Summary = (string)p.Element("Summary"),
                           Description = (string)p.Element("Description"),
                           SEName = (string)p.Element("SEName"),
                           SETitle = (string)p.Element("SETitle"),
                           XmlPackage = (string)p.Element("XmlPackage"),
                           IsAKit = (byte)(int)p.Element("IsAKit"),
                           ExtensionData = (string)p.Element("ExtensionData"),
                       };

            // TODO: UUGGGGGGG Converting b/w IEnumerable & EntitySet
            var kitGroups = (from kg in inDoc.Descendants("KitGroups").Elements("KitGroup")
                             select new KitGroup
                             {
                                 KitGroupID = (int)kg.Attribute("ID"),
                                 KitGroupGUID = (Guid)kg.Attribute("GUID"),
                                 Name = (string)kg.Element("Name"),
                                 KitItems = // THIS IS WHERE IT FAILS - "Cannot convert source type IEnumerable to target type EntitySet..."
                                     (from ki in kg.Descendants("KitItems").Elements("KitItem")
                                      select new KitItem
                                      {
                                          KitItemID = (int)ki.Attribute("ID"),
                                          KitItemGUID = (Guid)ki.Attribute("GUID")
                                      })
                             });

            Product ImportedProduct = prod.First();
            ImportedProduct.KitGroups = new EntitySet<KitGroup>();
            ImportedProduct.KitGroups.AddRange(kitGroups);
            return ImportedProduct;
        }
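
    One way people usually bridge this gap is a small extension method that materializes any IEnumerable<T> into an EntitySet<T>, so the inner query can stay declarative. A rough sketch (the ToEntitySet name is made up here; it assumes the LINQ to SQL EntitySet<T> and its AddRange method):

        using System.Collections.Generic;
        using System.Data.Linq;

        public static class EntitySetExtensions
        {
            // Copies the sequence into a fresh EntitySet<T>.
            public static EntitySet<T> ToEntitySet<T>(this IEnumerable<T> source) where T : class
            {
                var set = new EntitySet<T>();
                set.AddRange(source);
                return set;
            }
        }

    With that in scope, the failing assignment could become KitItems = (from ki in ... select new KitItem { ... }).ToEntitySet(), mirroring what the code already does by hand for ImportedProduct.KitGroups.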

    Read the article

  • Can't insert row with auto-increment key via Fluent NHibernate

    - by Jeff Shattock
    I'm getting started with Fluent NHibernate, and NHibernate in general. I'm trying to do something that I feel is pretty basic, but I can't quite get it to work. I'm trying to add a new entry to a simple table. Here's the entity class:

        public class Product
        {
            public Product()
            {
                id = 0;
            }
            public virtual int id { get; set; }
            public virtual string description { get; set; }
        }

    Here's its mapping:

        public class ProductMap : ClassMap<Product>
        {
            public ProductMap()
            {
                Id(p => p.id).GeneratedBy.Identity().UnsavedValue(0);
                Map(p => p.description);
            }
        }

    I've tried that with and without the additional calls after Id(). And the insert code:

        var p = new Product() { description = "Apples" };
        using (var s = _sf.CreateSession())
        {
            s.Save(new_product);
            s.Flush();
        }

    where _sf is a properly configured SessionSource. When I execute this code, I get:

        NHibernate.AssertionFailure : null identifier

    which makes sense based on the SQL that NHibernate is executing:

        INSERT INTO "Product" (description) VALUES (@p0);@p0 = 'Apples'

    It doesn't seem to be trying to set the Id field, which seems OK (on its face) since the DB should generate that. But it's not, I think. The DB schema is autogenerated by FNH:

        var config = Fluently.Configure().Database(MsSqlCeConfiguration.Standard.ShowSql().ConnectionString(@"Data Source=Database1.sdf"));
        var SessionSource = new SessionSource(config.BuildConfiguration().Properties, new ModelMappings());
        var Session = SessionSource.CreateSession();
        SessionSource.BuildSchema(Session);
        CreateInitialData(Session);
        Session.Flush();
        Session.Clear();

    I'm sure to be doing tons of things wrong, but what's the one that's causing this error?

    Read the article

  • LinqToSql - Parallel - DataContext and Parallel

    - by Gregoire
    In .NET 4 and a multicore environment, does the LINQ to SQL DataContext object take advantage of the new parallel extensions if we use DataLoadOptions.LoadWith?

    EDIT: I know LINQ to SQL does not parallelize ordinary queries. What I want to know is: when we specify DataLoadOptions.LoadWith, does it use parallelization to perform the match between each entity and its sub-entities?

    Example:

        using (MyDataContext context = new MyDataContext())
        {
            DataLoadOptions options = new DataLoadOptions();
            options.LoadWith<Product>(p => p.Category);
            return this.DataContext.Products.Where(p => p.SomeCondition);
        }

    generates the following SQL:

        Select Id, Name from Categories
        Select Id, Name, CategoryId from Products where p.SomeCondition

    When all the products are created, will we have:

        categories.ToArray();
        Parallel.ForEach(products, p =>
        {
            p.Category = categories.FirstOrDefault(c => c.Id == p.CategoryId);
        });

    or:

        categories.ToArray();
        foreach (Product product in products)
        {
            product.Category = categories.FirstOrDefault(c => c.Id == product.CategoryId);
        }

    ?

    Read the article

  • Repository Pattern and Entity Framework.

    - by vitorcast
    Hi people, I want to make an implementation of the repository pattern with ASP.NET MVC 2 and Entity Framework, but I have had some issues in the process.

    First of all, I have 2 entities that have a relationship between them, like Order and Product. When I generate my dbml file it gives me a class Order with a property that maps a "ProductSet", and one class Product with a property that maps which Order that Product relates to.

    So I create my repository pattern with an IRepository with the basic CRUD operations, and inside my controllers I implement the ProductRepository or OrderRepository. The problem occurs when I try to create a Product and have to assign my Order on it, like:

        ProductOne.Order = _orderRepository.Find(orderId);

    That operation gave me some strange behavior and I can't find out what is wrong with it. Thanks for the help.
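
    For readers unfamiliar with the pattern being described, a minimal sketch of what such a generic repository contract often looks like (the interface and method names here are illustrative assumptions, not the poster's actual code):

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public interface IRepository<T> where T : class
        {
            T Find(int id);
            IQueryable<T> Query(Expression<Func<T, bool>> predicate);
            void Add(T entity);
            void Remove(T entity);
            void SaveChanges();
        }

    A common cause of the "strange behavior" described is that the two repositories each create their own underlying context, so the Order fetched from _orderRepository belongs to a different context than the Product being created; sharing one context (or unit of work) per request is the usual fix.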

    Read the article

  • Google Map API v3 — set bounds and center

    - by Michael Bradley
    Hi, I've recently switched to Google Maps API V3. I'm working off a simple example which plots markers from an array, however I do not know how to center and zoom automatically with respect to the markers. I've searched the net high and low, including Google's own documentation, but have not found a clear answer. I know I could simply take an average of the co-ordinates, but how would I set the zoom accordingly? Could somebody please point me in the right direction? Perhaps you know of a good tutorial. Many thanks in advance, Michael

        function initialize() {
          var myOptions = {
            zoom: 10,
            center: new google.maps.LatLng(-33.9, 151.2),
            mapTypeId: google.maps.MapTypeId.ROADMAP
          }
          var map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
          setMarkers(map, beaches);
        }

        var beaches = [
          ['Bondi Beach', -33.890542, 151.274856, 4],
          ['Coogee Beach', -33.423036, 151.259052, 5],
          ['Cronulla Beach', -34.028249, 121.157507, 3],
          ['Manly Beach', -33.80010128657071, 151.28747820854187, 2],
          ['Maroubra Beach', -33.450198, 151.259302, 1]
        ];

        function setMarkers(map, locations) {
          var image = new google.maps.MarkerImage('images/beachflag.png',
            new google.maps.Size(20, 32),
            new google.maps.Point(0, 0),
            new google.maps.Point(0, 32));
          var shadow = new google.maps.MarkerImage('images/beachflag_shadow.png',
            new google.maps.Size(37, 32),
            new google.maps.Point(0, 0),
            new google.maps.Point(0, 32));
          var lat = map.getCenter().lat();
          var lng = map.getCenter().lng();
          var shape = {
            coord: [1, 1, 1, 20, 18, 20, 18, 1],
            type: 'poly'
          };
          for (var i = 0; i < locations.length; i++) {
            var beach = locations[i];
            var myLatLng = new google.maps.LatLng(beach[1], beach[2]);
            var marker = new google.maps.Marker({
              position: myLatLng,
              map: map,
              shadow: shadow,
              icon: image,
              shape: shape,
              title: beach[0],
              zIndex: beach[3]
            });
          }
        }

    Read the article

  • Nhibernate Criteria Query with Join

    - by John Peters
    I am looking to do the following using an NHibernate criteria query.

    I have "Product"s which have 0 to many "Media"s. A Product can be associated with 1 to many ProductCategories. These use a table in the middle to create the join:

        ProductCategories
            Id
            Title
        ProductsProductCategories
            ProductCategoryId
            ProductId
        Products
            Id
            Title
        ProductMedias
            ProductId
            MediaId
        Medias
            Id
            MediaType

    I need to implement a criteria query to return all Products in a ProductCategory and the top 1 associated Media, or no media if none exists. So although for example a "T Shirt" may have 10 Medias associated, my result should be something similar to this:

        Product.Id    Product.Title    MediaId
        1             T Shirt          21
        2             Shoes            Null
        3             Hat              43

    I have tried the following solutions using JoinType.LeftOuterJoin:

    1) productCriteria.SetResultTransformer(Transformers.DistinctRootEntity);

    This hasn't worked, as the transform is done code-side and, since I have .SetFirstResult() and .SetMaxResults() for paging purposes, it won't work.

    2)

        .SetProjection(
            Projections.Distinct(
                Projections.ProjectionList()
                    .Add(Projections.Alias(Projections.Property("Id"), "Id"))
                    ...
        .SetResultTransformer(Transformers.AliasToBean());

    This hasn't worked, as I cannot seem to populate a value for Medias.Id in the projections. (Similar to http://stackoverflow.com/questions/1036116/nhibernate-criteria-api-projections)

    Any help would be greatly appreciated.

    Read the article

  • Doctrine 2 cannot find entities

    - by Flyn San
    I'm using Kohana 3 and have a /doctrine/Entites folder with my entities inside. When executing the code

        $product = Doctrine::em()->find('Entities\Product', 1);

    in my controller, I get the error

        class_parents(): Class Entities\Product does not exist and could not be loaded

    Below is the controller (classes/controller/welcome.php):

        <?php
        class Controller_Welcome extends Controller
        {
            public function action_index()
            {
                $prod = Doctrine::em()->find('Entities\Product', 1);
            }
        }

    Below is the entity (/doctrine/Entities/Product.php):

        <?php
        /**
         * @Entity
         * @Table{name="products"}
         */
        class Product
        {
            /** @Id @Column{type="integer"} */
            private $id;

            /** @Column(type="string", length="255") */
            private $name;

            public function getId() { return $this->id; }
            public function setId($id) { $this->id = intval($id); }
            public function getName() { return $this->name; }
            public function setName($name) { $this->name = $name; }
        }

    Below is the Doctrine module bootstrap file (/modules/doctrine/init.php):

        class Doctrine
        {
            private static $_instance = null;
            private $_application_mode = 'development';
            private $_em = null;

            public static function em()
            {
                if ( self::$_instance === null )
                    self::$_instance = new Doctrine();
                return self::$_instance->_em;
            }

            public function __construct()
            {
                require __DIR__.'/classes/doctrine/Doctrine/Common/ClassLoader.php';

                $classLoader = new \Doctrine\Common\ClassLoader('Doctrine', __DIR__.'/classes/doctrine');
                $classLoader->register();
                $classLoader = new \Doctrine\Common\ClassLoader('Symfony', __DIR__.'/classes/doctrine/Doctrine');
                $classLoader->register();
                $classLoader = new \Doctrine\Common\ClassLoader('Entities', APPPATH.'doctrine');
                $classLoader->register();

                // Set up caching method
                $cache = $this->_application_mode == 'development'
                    ? new \Doctrine\Common\Cache\ArrayCache
                    : new \Doctrine\Common\Cache\ApcCache;

                $config = new Configuration;
                $config->setMetadataCacheImpl( $cache );
                $driver = $config->newDefaultAnnotationDriver( APPPATH.'doctrine/Entities' );
                $config->setMetadataDriverImpl( $driver );
                $config->setQueryCacheImpl( $cache );
                $config->setProxyDir( APPPATH.'doctrine/Proxies' );
                $config->setProxyNamespace('Proxies');
                $config->setAutoGenerateProxyClasses( $this->_application_mode == 'development' );

                $dbconf = Kohana::config('database');
                $dbconf = reset($dbconf); // Use the first database specified in the config

                $this->_em = EntityManager::create(array(
                    'dbname'   => $dbconf['connection']['database'],
                    'user'     => $dbconf['connection']['username'],
                    'password' => $dbconf['connection']['password'],
                    'host'     => $dbconf['connection']['hostname'],
                    'driver'   => 'pdo_mysql',
                ), $config);
            }
        }

    Any ideas what I've done wrong?

    Read the article

  • django manage.py syncdb not working?

    - by Diego
    Trying to learn Django, I closed the shell and am getting this problem now when I call python manage.py syncdb. Any idea what happened? I've already set up a db. I have manage.py set up in the folder django_bookmarks. What's up here?

        Traceback (most recent call last):
          File "manage.py", line 2, in <module>
            from django.core.management import execute_manager
        ImportError: No module named django.core.management

        my-computer:~/Django-1.1.1/django_bookmarks mycomp$ export PATH=/Users/mycomp/bin:$PATH
        my-computer:~/Django-1.1.1/django_bookmarks mycomp$ python manage.py syncdb
        Traceback (most recent call last):
          File "manage.py", line 2, in <module>
            from django.core.management import execute_manager
        ImportError: No module named django.core.management
        my-computer:~/Django-1.1.1/django_bookmarks mycomp$

    Read the article

  • Private constructor and public parameter constructor -C#

    - by Amutha
    I heard that a private constructor prevents object creation from the outside world. When I have code like this:

        public class Product
        {
            public string Name { get; set; }
            public double Price { get; set; }

            Product()
            {
            }

            public Product(string _name, double _price)
            {
            }
        }

    I can still declare a public constructor (with parameters); won't it spoil the purpose of the private constructor? When do we need both a private constructor and a public parameterized constructor in code? I need a detailed explanation, please.
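
    A brief illustrative sketch of why a type might keep a parameterless private constructor alongside a public parameterized one (the factory method shown is a common motivation, not something from the question itself):

        public class Product
        {
            public string Name { get; set; }
            public double Price { get; set; }

            // Private: blocks 'new Product()' from outside the class, reserved
            // for internal use such as factories or serializers.
            private Product()
            {
            }

            // Public: the only way callers can construct a Product directly,
            // so every externally created instance has a name and price.
            public Product(string name, double price)
            {
                Name = name;
                Price = price;
            }

            // Internal creation path that can still use the private constructor.
            public static Product CreateDraft()
            {
                return new Product { Name = "(unnamed)", Price = 0.0 };
            }
        }

    In the question's snippet the parameterless constructor has no access modifier, which in C# already makes it private; adding the public two-argument constructor doesn't defeat that, it just means those two signatures are the only ways to build the object.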

    Read the article

  • SMO restore of SQL database doesn't overwrite

    - by Tom H.
    I'm trying to restore a database from a backup file using SMO. If the database does not already exist then it works fine. However, if the database already exists then I get no errors, but the database is not overwritten. The "restore" process still takes just as long, so it looks like it's working and doing a restore, but in the end the database has not changed.

    I'm doing this in PowerShell using SMO. The code is a bit long, but I've included it below. You'll notice that I do set $restore.ReplaceDatabase = $true. Also, I use a try-catch block and report on any errors (I hope), but none are returned. Any obvious mistakes? Is it possible that I'm not reporting some error and it's being hidden from me? Thanks for any help or advice that you can give!

        function Invoke-SqlRestore {
            param(
                [string]$backup_file_name,
                [string]$server_name,
                [string]$database_name,
                [switch]$norecovery=$false
            )

            # Get a new connection to the server
            [Microsoft.SqlServer.Management.Smo.Server]$server = New-SMOconnection -server_name $server_name
            Write-Host "Starting restore to $database_name on $server_name."

            Try {
                $backup_device = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($backup_file_name, "File")

                # Get local paths to the Database and Log file locations
                If ($server.Settings.DefaultFile.Length -eq 0) { $database_path = $server.Information.MasterDBPath }
                Else { $database_path = $server.Settings.DefaultFile }
                If ($server.Settings.DefaultLog.Length -eq 0) { $database_log_path = $server.Information.MasterDBLogPath }
                Else { $database_log_path = $server.Settings.DefaultLog }

                # Load up the Restore object settings
                $restore = New-Object Microsoft.SqlServer.Management.Smo.Restore
                $restore.Action = 'Database'
                $restore.Database = $database_name
                $restore.ReplaceDatabase = $true
                if ($norecovery.IsPresent) { $restore.NoRecovery = $true }
                Else { $restore.Norecovery = $false }
                $restore.Devices.Add($backup_device)

                # Get information from the backup file
                $restore_details = $restore.ReadBackupHeader($server)
                $data_files = $restore.ReadFileList($server)

                # Restore all backup files
                ForEach ($data_row in $data_files) {
                    $logical_name = $data_row.LogicalName
                    $physical_name = Get-FileName -path $data_row.PhysicalName
                    $restore_data = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
                    $restore_data.LogicalFileName = $logical_name
                    if ($data_row.Type -eq "D") {
                        # Restore Data file
                        $restore_data.PhysicalFileName = $database_path + "\" + $physical_name
                    }
                    Else {
                        # Restore Log file
                        $restore_data.PhysicalFileName = $database_log_path + "\" + $physical_name
                    }
                    [Void]$restore.RelocateFiles.Add($restore_data)
                }

                $restore.SqlRestore($server)

                # If there are two files, assume the next is a Log
                if ($restore_details.Rows.Count -gt 1) {
                    $restore.Action = [Microsoft.SqlServer.Management.Smo.RestoreActionType]::Log
                    $restore.FileNumber = 2
                    $restore.SqlRestore($server)
                }
            }
            Catch {
                $ex = $_.Exception
                Write-Output $ex.message
                $ex = $ex.InnerException
                while ($ex.InnerException) {
                    Write-Output $ex.InnerException.message
                    $ex = $ex.InnerException
                }
                Throw $ex
            }
            Finally {
                $server.ConnectionContext.Disconnect()
            }
            Write-Host "Restore ended without any errors."
        }

    Read the article

  • WCF and Unity - Dependency Injection

    - by Michael
    I'm trying to hook up WCF with dependency injection. All the examples that I have found are based on the assumption that you either use a .svc (ServiceHostFactory) service or use app.config to configure the container. Other examples are also based on the container being passed around to the classes. I would like a solution where the container is not passed around (not tightly coupled to Unity), where I don't use a config file to configure the container, and where I use self-hosted services.

    The problem is - as I see it - that the ServiceHost takes the type of the service implementation as a parameter, so what difference does it make to use the InstanceProvider? The solution I have come up with at the moment is to register the ServiceHost (or a specialization) and register a Type with a name (e.g. container.RegisterInstance<Type>("ServiceName", typeof(Service));), and then container.RegisterType<UnityServiceHost>(new InjectionConstructor(new ResolvedParameter<Type>("ServiceName"))); to register the ServiceHost.

    Any better solutions out there? Am I perhaps way off in my assumptions?

    Best regards, Michael
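
    For context on the InstanceProvider route mentioned above, a rough sketch of what a Unity-backed IInstanceProvider can look like; the class name UnityInstanceProvider is invented for illustration, and the behavior/endpoint wiring is only hinted at afterwards:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Dispatcher;
        using Microsoft.Practices.Unity;

        // Resolves service instances from the container instead of letting
        // WCF call the service's default constructor.
        public class UnityInstanceProvider : IInstanceProvider
        {
            private readonly IUnityContainer _container;
            private readonly Type _serviceType;

            public UnityInstanceProvider(IUnityContainer container, Type serviceType)
            {
                _container = container;
                _serviceType = serviceType;
            }

            public object GetInstance(InstanceContext instanceContext, Message message)
            {
                return _container.Resolve(_serviceType);
            }

            public object GetInstance(InstanceContext instanceContext)
            {
                return GetInstance(instanceContext, null);
            }

            public void ReleaseInstance(InstanceContext instanceContext, object instance)
            {
                _container.Teardown(instance);
            }
        }

    The provider still has to be attached to each endpoint's DispatchRuntime.InstanceProvider, typically from a service or contract behavior that a custom UnityServiceHost adds while opening; that wiring is what lets the host resolve the concrete service type from the container instead of having the container passed around.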

    Read the article

  • Data mining on a MySQL database

    - by sliptix
    Hello, I am beginning with text mining. I have two database tables with thousands of rows: a table for "skills" and a table for "skill categories". Every "skill" belongs to a skill category. A "skill" is, physically, a varchar(200) field in the database containing some text describing the skill. Here are some skills extracted from the skills table:

        "PHP (good level), Java (intermediate), C++"
        "PHP5"
        "project management and quality management"
        "beginning Javascript"
        "water engineering"
        "dfsdf zerze rzer"
        "cibling customers"

    What I want to do is to extract knowledge from those fields; I mean extract only the real skill and ignore the rest of the useless text. For the above example I want to get only an array with:

        "PHP"
        "Java"
        "C++"
        "PHP5"
        "project management"
        "quality management"
        "Javascript"
        "water engineering"
        "cibling customers"

    What should I do to extract the skills from tons of data, please? Do you know specific algorithms to do this, e.g. k-means...? Thanks in advance.

    Read the article

  • Migration for creating and deleting model in South

    - by Almad
    I've created a model and created an initial migration for it:

        db.create_table('tvguide_tvguide', (
            ('id', models.AutoField(primary_key=True)),
            ('date', models.DateField(_('Date'), auto_now=True, db_index=True)),
        ))
        db.send_create_signal('tvguide', ['TVGuide'])

        models = {
            'tvguide.tvguide': {
                'channels': ('models.ManyToManyField', ["orm['tvguide.Channel']"], {'through': "'ChannelInTVGuide'"}),
                'date': ('models.DateField', ["_('Date')"], {'auto_now': 'True', 'db_index': 'True'}),
                'id': ('models.AutoField', [], {'primary_key': 'True'})
            }
        }

        complete_apps = ['tvguide']

    Now, I'd like to drop it:

        db.drop_table('tvguide_tvguide')

    However, I have also deleted the corresponding model. South (at least 0.6.2) is however trying to access it:

        (venv)[almad@eva-03 project]$ ./manage.py migrate tvguide
        Running migrations for tvguide:
        - Migrating forwards to 0002_removemodels.
        > tvguide: 0001_initial
        Traceback (most recent call last):
          File "./manage.py", line 27, in <module>
            execute_from_command_line()
          File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
            utility.execute()
          File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 303, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "/usr/lib/python2.6/site-packages/django/core/management/base.py", line 195, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "/usr/lib/python2.6/site-packages/django/core/management/base.py", line 222, in execute
            output = self.handle(*args, **options)
          File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/management/commands/migrate.py", line 91, in handle
            skip = skip,
          File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/migration.py", line 581, in migrate_app
            result = run_forwards(mapp, [mname], fake=fake, db_dry_run=db_dry_run, verbosity=verbosity)
          File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/migration.py", line 388, in run_forwards
            verbosity = verbosity,
          File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/migration.py", line 287, in run_migrations
            orm = klass.orm
          File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/orm.py", line 62, in __get__
            self.orm = FakeORM(*self._args)
          File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/orm.py", line 45, in FakeORM
            _orm_cache[args] = _FakeORM(*args)
          File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/orm.py", line 106, in __init__
            self.models[name] = self.make_model(app_name, model_name, data)
          File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/orm.py", line 307, in make_model
            tuple(map(ask_for_it_by_name, bases)),
          File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/utils.py", line 23, in ask_for_it_by_name
            ask_for_it_by_name.cache[name] = _ask_for_it_by_name(name)
          File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/utils.py", line 17, in _ask_for_it_by_name
            return getattr(module, bits[-1])
        AttributeError: 'module' object has no attribute 'TVGuide'

    Is there a way around this?

    Read the article

  • Django: Using 2 different AdminSite instances with different models registered

    - by omat
    Apart from the usual admin, I want to create a limited admin for non-staff users. This admin site will have different registered ModelAdmins. I created a folder /useradmin/ in my project directory and, similar to contrib/admin/__init__.py, I added an autodiscover() which will register models defined in useradmin.py modules instead of admin.py:

        # useradmin/__init__.py
        def autodiscover():
            # Same as admin.autodiscover() but registers useradmin.py modules
            ...
            for app in settings.INSTALLED_APPS:
                mod = import_module(app)
                try:
                    before_import_registry = copy.copy(site._registry)
                    import_module('%s.useradmin' % app)
                except:
                    site._registry = before_import_registry
                    if module_has_submodule(mod, 'useradmin'):
                        raise

    I also created sites.py under useradmin/ to override AdminSite, similar to contrib/admin/sites:

        # useradmin/sites.py
        class UserAdminSite(AdminSite):
            def has_permission(self, request):
                # Don't care if the user is staff
                return request.user.is_active

            def login(self, request):
                # Do the login stuff but don't care if the user is staff
                if request.user.is_authenticated():
                    ...
                else:
                    ...

        site = UserAdminSite(name='useradmin')

    In the project's URLs:

        # urls.py
        from django.contrib import admin
        import useradmin

        admin.autodiscover()
        useradmin.autodiscover()

        urlpatterns = patterns('',
            (r'^admin/', include(admin.site.urls)),
            (r'^useradmin/', include(useradmin.site.urls)),
        )

    And I try to register different models in admin.py and useradmin.py modules under the app directories:

        # products/useradmin.py
        import useradmin

        class ProductAdmin(useradmin.ModelAdmin):
            pass

        useradmin.site.register(Product, ProductAdmin)

    But when registering models in useradmin.py like useradmin.site.register(Product, ProductAdmin), I get a 'module' object has no attribute 'ModelAdmin' exception. Though when I try this via the shell, import useradmin followed by from useradmin import ModelAdmin does not raise any exception. Any ideas what might be wrong?

    Edit: I tried going the @Luke way and arranged the code as follows, as minimal as possible (file paths are relative to the project root):

        # admin.py
        from django.contrib.admin import autodiscover
        from django.contrib.admin.sites import AdminSite

        user_site = AdminSite(name='useradmin')

        # urls.py (does not even have url patterns; just calls autodiscover())
        import admin
        admin.autodiscover()

        # products/admin.py
        import admin
        from products.models import Product

        admin.user_site.register(Product)

    As a result I get an AttributeError: 'module' object has no attribute 'user_site' when admin.user_site.register(Product) in products/admin.py is called. Any ideas?

    Solution: I don't know if there are better ways, but renaming the admin.py in the project root to useradmin.py and updating the imports accordingly resolved the last case, which was a naming and import conflict.

    Read the article

  • Sharp Architecture mapping error

    - by fez
    This is the error I get when I load the Product entity. My code is:

        Configuration cfg = new NHibernate.Cfg.Configuration();
        ISessionFactory sessions;

        public MedicineController() // Constructor
        {
            cfg.Configure();
            sessions = cfg.BuildSessionFactory();
        }

        using (var session = sessions.OpenSession())
        {
            var pGet = session.Get<Product>(0);
        }

    The error is:

        Unable to locate persister for the entity named 'SharpArchitecture.Domain.Product'.
        The persister defines the persistence strategy for an entity.
        Possible causes:
        - The mapping for 'SharpArchitecture.Domain.Product' was not added to the NHibernate configuration.

    Thanks in advance

    Read the article

  • Low latency programming

    - by Sambatyon
    I've been reading a lot about low latency financial systems (especially since the famous case of corporate espionage) and the idea of low latency systems has been in my mind ever since. There are a million applications that can use what these guys are doing, so I would like to learn more about the topic. The thing is I cannot find anything valuable about the topic. Can anybody recommend books, sites, examples on low latency systems?

    Read the article

  • How to group a complex list of objects using LINQ?

    - by Daoming Yang
    I want to select and group the products, and rank them by the number of times they occur. For example, I have an OrderList; each Order object has an OrderProductVariantList (OrderLineList), each OrderProductVariant object has a ProductVariant, and the ProductVariant object then has a Product object which contains the product information.

    A friend helped me with the following code. It compiles, but it did not return any value/result. I used the watch window for the query and it gave me "The name 'query' does not exist in the current context". Can anyone help me? Many thanks.

        var query = orderList.SelectMany( o => o.OrderLineList )   // results in IEnumerable<OrderProductVariant>
                             .Select( opv => opv.ProductVariant )
                             .Select( pv => pv.Product )
                             .GroupBy( p => p )
                             .Select( g => new {
                                 Product = g.Key,
                                 Count = g.Count()
                             });
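
    For comparison, a hedged sketch of how the same grouping is commonly written with the ranking applied, assuming the property names from the question (orderList and the entity classes are whatever already exists in the poster's model):

        var productCounts = orderList
            .SelectMany(o => o.OrderLineList)            // flatten to order lines
            .Select(opv => opv.ProductVariant.Product)   // walk down to the Product
            .GroupBy(p => p)                             // group identical products
            .Select(g => new { Product = g.Key, Count = g.Count() })
            .OrderByDescending(x => x.Count)             // rank by occurrences
            .ToList();                                   // force evaluation

    The ToList() at the end forces evaluation, which is often what people expect when inspecting results in the debugger; without it the query is deferred and nothing has executed yet. Note also that GroupBy(p => p) relies on Product equality (reference equality by default), so grouping by a key such as an Id property may be safer if the same product can appear as distinct object instances.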

    Read the article
