Search Results

Search found 14816 results on 593 pages for 'logical model'.

Page 426/593 | < Previous Page | 422 423 424 425 426 427 428 429 430 431 432 433  | Next Page >

  • Core Data performance deleteObject and save managed object context

    - by Gary
    I am trying to figure out the best way to bulk-delete objects in my Core Data store. I have some objects with a parent/child relationship. At times I need to "refresh" the parent object by clearing out all of the existing child objects and adding new ones to Core Data. The "delete all" portion of this operation is where I am running into trouble. I accomplish it by looping through the children and calling deleteObject for each one. I have noticed that the NSManagedObjectContext save call following all of the deleteObject calls is very slow when I am deleting 15,000 objects. How can I speed up this save? Is there anything happening during the save operation that I can avoid by setting parameters differently or setting up my model another way? I've noticed that memory spikes during this operation as well. I really just want a "delete * from". Thanks.
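
    One thing that often helps at this scale is batching the deletes rather than saving all 15,000 in one transaction, and turning off undo registration on the context. A minimal sketch, assuming `context` is your NSManagedObjectContext and `children` is the already-fetched array of child objects (names are illustrative, not from the original post):

      // Avoid undo registration overhead; each deleteObject otherwise records an undo action.
      context.undoManager = nil;

      NSUInteger batchSize = 1000;
      NSUInteger i = 0;
      for (NSManagedObject *child in children) {
          [context deleteObject:child];
          if (++i % batchSize == 0) {
              NSError *error = nil;
              if (![context save:&error]) {
                  NSLog(@"Batch save failed: %@", error);
                  break;
              }
          }
      }
      NSError *finalError = nil;
      if (![context save:&finalError]) {
          NSLog(@"Final save failed: %@", finalError);
      }

    Fetching the children with setIncludesPropertyValues:NO on the fetch request (so only faults are pulled in) can also keep the memory spike down, since the objects' data never has to be loaded just to delete them.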

    Read the article

  • How to assign an object to Smarty templates?

    - by keisimone
    I created a model object in PHP:
      class User {
          public $title;
          public function changeTitle($newTitle) {
              $this->title = $newTitle;
          }
      }
    How do I expose the properties of a User object in Smarty just by assigning the object? I know I can do this: $smarty->assign('title', $user->title); but my object has over 20 properties. Please advise. EDIT 1: the following did not work for me: $smarty->assign('user', $user); or $smarty->register_object('user', $user); after either one, {$user->title} output nothing. Thank you.

    Read the article

  • Can I disable DataAnnotations validation on DefaultModelBinder?

    - by Max Toro
    I want DefaultModelBinder not to perform any validation based on DataAnnotations metadata. I'm already using DataAnnotations with DynamicData for the admin area of my site, and I need a different set of validation rules for the MVC based front-end. I'm decorating my classes with the MetadataType attribute. If I could have different MetadataType classes for the same model but used on different scenarios that would be great. If not I'm fine with just disabling the validation on the DefaultModelBinder, either by setting some property or by creating a specialized version of it.
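
    If a custom binder ends up being the route, one way to switch the checks off is to subclass DefaultModelBinder and neutralize its validation hooks. A rough sketch only, assuming ASP.NET MVC 2's DefaultModelBinder (the hook names below are the framework's; wiring it up in Global.asax is shown as a comment):

      using System.ComponentModel;
      using System.Web.Mvc;

      // Binds values as usual but skips the validation pass.
      public class NonValidatingModelBinder : DefaultModelBinder
      {
          // The base implementation runs model-level validation here; do nothing instead.
          protected override void OnModelUpdated(ControllerContext controllerContext,
                                                 ModelBindingContext bindingContext)
          {
          }

          // Per-property validation hook; returning true accepts every value.
          protected override bool OnPropertyValidating(ControllerContext controllerContext,
                                                       ModelBindingContext bindingContext,
                                                       PropertyDescriptor propertyDescriptor,
                                                       object value)
          {
              return true;
          }
      }

      // In Global.asax.cs, Application_Start():
      // ModelBinders.Binders.DefaultBinder = new NonValidatingModelBinder();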

    Read the article

  • JQuery Pure Template

    - by cem
    I can't figure out what's wrong. It works when I refresh only the topics, but it fails when I try to refresh both the topics and the page links; i.e. the topics table refreshes and the 'pagelinks' disappear. I suspect PURE cannot reach (read) the second template node. By the way, I tested their code: the first message box shows all of the nodes, including the 'pagelinks' node, but the second one (inside the returned function) only shows the topic rows. It looks like a bug. Does anyone know how I can solve this? P.S. I'm using the latest version of PURE. Thanks.
    Test code (pure.js, line 189):
      function dataselectfn(sel) {
          // ...
          m = sel.split('.');
          alert(m.toSource());
          return function (ctxt) {
              var data = ctxt.context;
              if (!data) { return ''; }
              alert('in function: ' + m.toSource());
              // ...
    JSON:
      {"topics":[{"name":"foo"}],"pagelinks":[{"Page":1},{"Page":2}]}
    HTML, before PURE rendering:
      <table>
        <tbody>
          <tr>
            <td class="pagelinks">
              <a page="1" href="/Topics/IndexForAreas?page=1" class="p Page@page">1</a>
            </td>
            <td class="pagelinks">
              <a page="2" href="/Topics/IndexForAreas?page=2" class="p Page@page">2</a>
            </td>
          </tr>
        </tbody>
      </table>
    HTML, after PURE rendering:
      <table>
        <tbody>
          <tr>
          </tr>
        </tbody>
      </table>
    Controller:
      [Transaction]
      public ActionResult IndexForAreas(int? page)
      {
          TopicService topicService = new TopicService();
          PagedList<Topic> topics = topicService.GetPaged(page);
          if (Request.IsAjaxRequest())
          {
              return Json(new {
                  topics = topics.Select(t => new { name = t.Name, }),
                  pagelinks = PagingHelper.AsPager(topics, 1)
              });
          }
          return View(topics);
      }
    ASP.NET view:
      <div class="topiccontainer">
        <table>
          <% foreach (Topic topic in ViewData.Model) { %>
            <tr class="topics">
              <td>
                <%= Html.ActionLink<ForumPostsController>(ec => ec.Index(topic.Name, null), topic.Name, new { @class="name viewlink@href" })%>
              </td>
              //bla bla...
            </tr>
          <%} %>
        </table>
        <table>
          <tr>
            <% Html.Pager(Model, 1, p => { %>
              <td class="pagelinks">
                <%= Html.ActionLink<TopicsController>(c => c.IndexForAreas(p.Page), p.Page.ToString(), new { page = p.Page, @class = "Page@page" })%>
              </td>
            <% }); %>
          </tr>
        </table>
      </div>
    Master page:
      <% Html.RenderAction("IndexForAreas", "Topics", new { area = "" }); %>
      <script type="text/javascript">
          $.post("<%= Html.BuildUrlFromExpressionForAreas<TopicsController>(c => c.IndexForAreas(null)) %>",
              { page: page },
              function (data) { $(".topiccontainer").autoRender(data); },
              "json"
          );
      </script>

    Read the article

  • Linux Software RAID recovery

    - by Zoredache
    I am seeing a discrepancy between the output of mdadm --detail and mdadm --examine, and I don't understand why. This output:
      mdadm --detail /dev/md2
      /dev/md2:
              Version : 0.90
        Creation Time : Wed Mar 14 18:20:52 2012
           Raid Level : raid10
           Array Size : 3662760640 (3493.08 GiB 3750.67 GB)
        Used Dev Size : 1465104256 (1397.23 GiB 1500.27 GB)
         Raid Devices : 5
        Total Devices : 5
      Preferred Minor : 2
          Persistence : Superblock is persistent
    seems to contradict this (the same for every disk in the array):
      mdadm --examine /dev/sdc2
      /dev/sdc2:
                Magic : a92b4efc
              Version : 0.90.00
                 UUID : 1f54d708:60227dd6:163c2a05:89fa2e07 (local to host)
        Creation Time : Wed Mar 14 18:20:52 2012
           Raid Level : raid10
        Used Dev Size : 1465104320 (1397.23 GiB 1500.27 GB)
           Array Size : 2930208640 (2794.46 GiB 3000.53 GB)
         Raid Devices : 5
        Total Devices : 5
      Preferred Minor : 2
    The array was created like this:
      mdadm -v --create /dev/md2 \
        --level=raid10 --layout=o2 --raid-devices=5 \
        --chunk=64 --metadata=0.90 \
        /dev/sdg2 /dev/sdf2 /dev/sde2 /dev/sdd2 /dev/sdc2
    Each of the 5 individual drives has partitions like this:
      Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
      255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00057754
         Device Boot      Start         End      Blocks   Id  System
      /dev/sdc1            2048       34815       16384   83  Linux
      /dev/sdc2           34816  2930243583  1465104384   fd  Linux raid autodetect
    Backstory: the SATA controller failed in a box I provide some support for. The failure was ugly, so individual drives fell out of the array a little at a time. While there are backups, they are not really done as frequently as we really need. There is some data that I am trying to recover if I can. I got additional hardware and I was able to access the drives again. The drives appear to be fine, and I can get the array and filesystem active and mounted (using read-only mode). I am able to access some data on the filesystem and have been copying that off, but I am seeing lots of errors when I try to copy the most recent data. When I try to access that most recent data I get errors like the ones below, which makes me think that the array-size discrepancy may be the problem.
      Mar 14 18:26:04 server kernel: [351588.196299] dm-7: rw=0, want=6619839616, limit=6442450944
      Mar 14 18:26:04 server kernel: [351588.196309] attempt to access beyond end of device
      Mar 14 18:26:04 server kernel: [351588.196313] dm-7: rw=0, want=6619839616, limit=6442450944
      Mar 14 18:26:04 server kernel: [351588.199260] attempt to access beyond end of device
      Mar 14 18:26:04 server kernel: [351588.199264] dm-7: rw=0, want=20647626304, limit=6442450944
      Mar 14 18:26:04 server kernel: [351588.202446] attempt to access beyond end of device
      Mar 14 18:26:04 server kernel: [351588.202450] dm-7: rw=0, want=19973212288, limit=6442450944
      Mar 14 18:26:04 server kernel: [351588.205516] attempt to access beyond end of device
      Mar 14 18:26:04 server kernel: [351588.205520] dm-7: rw=0, want=8009695096, limit=6442450944

    Read the article

  • Error when trying to create a faceted plot in ggplot2

    - by John Horton
    I am trying to make a faceted plot in ggplot2 of the coefficients on the regressors from two linear models with the same predictors. The data frame I constructed is this:
      > r.together
                 reg         coef        se      y
      1  (Intercept)  5.068608671 0.6990873 Labels
      2     goodTRUE  0.310575129 0.5228815 Labels
      3    indiaTRUE -1.196868662 0.5192330 Labels
      4    moneyTRUE -0.586451273 0.6011257 Labels
      5     maleTRUE -0.157618168 0.5332040 Labels
      6  (Intercept)  4.225580743 0.6010509 Bonus
      7     goodTRUE  1.272760149 0.4524954 Bonus
      8    indiaTRUE -0.829588862 0.4492838 Bonus
      9    moneyTRUE -0.003571476 0.5175601 Bonus
      10    maleTRUE  0.977011737 0.4602726 Bonus
    The "y" column is a label for the model, reg holds the regressors, and coef and se are what you would expect. I want to plot:
      g <- qplot(reg, coef, facets = . ~ y, data = r.together) + coord_flip()
    But when I try to display the plot, I get:
      > print(g)
      Error in names(df) <- output :
        'names' attribute [2] must be the same length as the vector [1]
    What's strange is that qplot(reg, coef, colour = y, data = r.together) + coord_flip() plots as you would expect.

    Read the article

  • CakePHP bake: undefined function mysql_query on EasyPHP

    - by fabbrillo
    Hi, when I try to use the shell to build models (cake bake M) I get this error:
      Fatal error: Call to undefined function mysql_query() in C:\Program Files\EasyPHP-5.3.2\www\cake\cake\libs\model\datasources\dbo\dbo_mysql.php on line 588
    In phpinfo() the mysql extension is enabled, and I'm using the mysql driver. Running
      if (!function_exists('mysql_query')) echo 'error'; else echo 'all fine';
    in a separate file prints "all fine", but in dbo_mysql.php, just before line 588, it prints "error". I believe the database configuration is correct, as http://127.0.0.1/cake/ says "Your database configuration file is present" and "Cake is able to connect to the database". I'm using the latest stable version of CakePHP and EasyPHP on Windows XP Pro SP3, and the paths are set correctly. Any idea? Thank you

    Read the article

  • Proper way of deleting records with CodeIgniter

    - by luckytaxi
    I came across another Stack Overflow post regarding GET vs. POST and it made me think. With CI, my URL for deleting a record is http://domain.com/item/delete/100, which deletes record id 100 from my DB. The record_id is pulled via $this->uri->segment. In my model I do have a where clause that checks that the user is indeed the owner of that record. The user_id is stored in a session inside the DB. Is that good enough? My understanding is that POST should be used for one-time modifications of data and GET is for retrieving records (e.g. viewing an item or a permalink).
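
    If you do want the delete to go over POST instead of a crafted GET URL, a minimal sketch in CodeIgniter terms could look like the following (controller and model names are illustrative; the ownership check stays in the query):

      // View: submit the id via POST rather than encoding it in the URL.
      // <form method="post" action="/item/delete">
      //   <input type="hidden" name="id" value="100" />
      //   <input type="submit" value="Delete" />
      // </form>

      // Controller
      class Item extends Controller {
          function delete() {
              $id      = (int) $this->input->post('id');
              $user_id = $this->session->userdata('user_id');
              $this->load->model('item_model');
              $this->item_model->delete_owned($id, $user_id);
              redirect('item');
          }
      }

      // Model: the WHERE clause enforces ownership, as in the original setup.
      class Item_model extends Model {
          function delete_owned($id, $user_id) {
              $this->db->where('id', $id);
              $this->db->where('user_id', $user_id);
              $this->db->delete('items');
          }
      }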

    Read the article

  • What is the structure of a (Data Access) Service Class

    - by jiewmeng
    I learnt that I should be using service classes to persist entities to the database instead of putting such logic in models/controllers. I currently have my service class structured something like this:
      class Application_DAO_User {
          protected $user;
          public function __construct(User $user) {
              $this->user = $user;
          }
          public function edit($name, ...) {
              $this->user->name = $name;
              ...
              $this->em->flush();
          }
      }
    Should this be the structure of a service class, where a service object represents an entity/model? Or should I instead pass a User object every time I want to do an edit, like this?
      public static function edit($user, $name) {
          $user->name = $name;
          $this->em->flush();
      }
    I am using Doctrine 2 and Zend Framework, but it shouldn't matter.
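
    For what it's worth, a common shape for this with Doctrine 2 is a service that wraps the EntityManager rather than a single entity, so one instance can serve any number of users. A sketch only, with illustrative names, assuming the EntityManager is injected:

      class UserService {
          protected $em;

          public function __construct(\Doctrine\ORM\EntityManager $em) {
              $this->em = $em;
          }

          // Operates on whichever User it is given; no per-entity state.
          public function changeName(User $user, $name) {
              $user->name = $name;
              $this->em->flush();
          }

          public function register(User $user) {
              $this->em->persist($user);
              $this->em->flush();
          }
      }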

    Read the article

  • kernel panic after LVM setup

    - by Manuel Sopena Ballesteros
    I broke my webserver... My setup is: VMWare ESXi environemt CPanel installed CentOS release 6.5 (Final) 4 CPUs 2G RAM 2x VM disks 100G each LVM system This was my previous storage settings (the server was working fine at this time): # df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_test01-lv_root 95G 1.4G 88G 2% / tmpfs 939M 0 939M 0% /dev/shm /dev/sdb1 99G 188M 94G 1% /tmp /dev/sda1 485M 54M 407M 12% /boot My web developer asked me to merge /tmp and / disks so this is what I did: Delete /dev/sdb1 partition using fdisk Create a new partition as LVM on /dev/sdb1 using fdisk Create a new physical volume -- pvcreate /dev/sdb1 Extend volume group -- vgextend /dev/sdb1 vg_test01 Extend logical volume -- lvextend -l +100%FREE /dev/vg_test01/lv_root Resize filesystem -- resize2fs /dev/vg_test01/lv_root This is the new configuration: # df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_test01-lv_root 213G 105G 97G 52% / tmpfs 939M 0 939M 0% /dev/shm /dev/sda1 485M 54M 407M 12% /boot /usr/tmpDSK 4.0G 145M 3.6G 4% /tmp Since I have the new settings my web server is throwing kernel panics quite often (around every 2 days). The message says: INFO: task <taskName>:<pid> blocked for more than 120 seconds. The list of process affected that I can see from the console are: mysqld queueprocd httpd suphp vmtoolsd loop0 auditd The only way I can fix this is reseting (cold reboot) the VM. I don't think it is a hardware issue as sar is not showing any bottleneck: Linux 2.6.32-431.3.1.el6.x86_64 (test01) 08/22/2014 _x86_64_ (4 CPU) 12:00:01 AM CPU %user %nice %system %iowait %steal %idle 12:10:01 AM all 26.86 0.01 0.98 0.57 0.00 71.57 12:20:01 AM all 1.78 0.02 1.03 0.08 0.00 97.09 12:30:01 AM all 26.34 0.02 0.85 0.05 0.00 72.74 12:40:01 AM all 27.12 0.01 1.11 1.22 0.00 70.54 12:50:01 AM all 1.59 0.02 0.94 0.13 0.00 97.32 01:00:01 AM all 26.10 0.01 0.77 0.04 0.00 73.07 01:10:01 AM all 27.51 0.01 1.16 0.14 0.00 71.18 01:20:01 AM all 1.80 0.07 1.06 0.08 0.00 96.99 01:30:01 AM all 26.19 0.01 0.78 0.05 0.00 72.96 01:40:01 AM all 26.62 0.02 0.87 0.05 0.00 72.45 01:50:02 AM all 1.35 0.01 0.87 0.02 0.00 97.75 02:00:01 AM all 26.11 0.02 0.69 0.02 0.00 73.17 02:10:01 AM all 26.73 0.02 0.89 0.14 0.00 72.21 02:20:01 AM all 1.45 0.01 0.92 0.04 0.00 97.58 02:30:01 AM all 26.59 0.01 1.06 0.03 0.00 72.31 02:40:01 AM all 26.27 0.01 0.72 0.05 0.00 72.95 02:50:01 AM all 0.86 0.01 0.50 0.09 0.00 98.53 03:00:01 AM all 25.61 0.02 0.39 0.03 0.00 73.96 03:10:01 AM all 26.30 0.08 0.66 0.14 0.00 72.82 03:20:01 AM all 0.81 0.01 0.51 0.04 0.00 98.63 03:30:02 AM all 26.15 0.02 0.53 0.07 0.00 73.24 03:40:01 AM all 26.06 0.01 0.47 0.04 0.00 73.42 03:50:01 AM all 0.96 0.02 0.51 0.03 0.00 98.48 Average: all 17.69 0.02 0.79 0.14 0.00 81.36 06:58:14 AM LINUX RESTART 07:00:01 AM CPU %user %nice %system %iowait %steal %idle 07:10:01 AM all 1.04 0.02 0.57 0.95 0.00 97.42 07:20:02 AM all 0.66 0.01 0.39 0.06 0.00 98.87 07:30:01 AM all 25.71 0.01 0.45 0.16 0.00 73.67 07:40:01 AM all 25.88 0.01 0.35 0.08 0.00 73.68 07:50:01 AM all 1.13 0.02 0.55 0.11 0.00 98.19 As you can see the server became unresponsive at 03.50 AM and I had to reset the VM at 06.58 AM to bring the website up again. I would appreciate any help/assistance to fix this issue. thank you very much

    Read the article

  • Why do I see a large performance hit with DRBD?

    - by BHS
    I see a much larger performance hit with DRBD than their user manual says I should get. I'm using DRBD 8.3.7 (Fedora 13 RPMs). I've setup a DRBD test and measured throughput of disk and network without DRBD: dd if=/dev/zero of=/data.tmp bs=512M count=1 oflag=direct 536870912 bytes (537 MB) copied, 4.62985 s, 116 MB/s / is a logical volume on the disk I'm testing with, mounted without DRBD iperf: [ 4] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec According to Throughput overhead expectations, the bottleneck would be whichever is slower, the network or the disk and DRBD should have an overhead of 3%. In my case network and I/O seem to be pretty evenly matched. It sounds like I should be able to get around 100 MB/s. So, with the raw drbd device, I get dd if=/dev/zero of=/dev/drbd2 bs=512M count=1 oflag=direct 536870912 bytes (537 MB) copied, 6.61362 s, 81.2 MB/s which is slower than I would expect. Then, once I format the device with ext4, I get dd if=/dev/zero of=/mnt/data.tmp bs=512M count=1 oflag=direct 536870912 bytes (537 MB) copied, 9.60918 s, 55.9 MB/s This doesn't seem right. There must be some other factor playing into this that I'm not aware of. global_common.conf global { usage-count yes; } common { protocol C; } syncer { al-extents 1801; rate 33M; } data_mirror.res resource data_mirror { device /dev/drbd1; disk /dev/sdb1; meta-disk internal; on cluster1 { address 192.168.33.10:7789; } on cluster2 { address 192.168.33.12:7789; } } For the hardware I have two identical machines: 6 GB RAM Quad core AMD Phenom 3.2Ghz Motherboard SATA controller 7200 RPM 64MB cache 1TB WD drive The network is 1Gb connected via a switch. I know that a direct connection is recommended, but could it make this much of a difference? Edited I just tried monitoring the bandwidth used to try to see what's happening. I used ibmonitor and measured average bandwidth while I ran the dd test 10 times. I got: avg ~450Mbits writing to ext4 avg ~800Mbits writing to raw device It looks like with ext4, drbd is using about half the bandwidth it uses with the raw device so there's a bottleneck that is not the network.

    Read the article

  • Show chosen option in a notification feed, Django

    - by apoo
    Hey, I have a model with:
      LIST_OPTIONS = (
          ('cheap', 'cheap'),
          ('expensive', 'expensive'),
          ('normal', 'normal'),
      )
    and I assign LIST_OPTIONS to the nature field:
      nature = models.CharField(max_length=15, choices=LIST_OPTIONS, null=False, blank=False)
    Then I save it:
      if self.pk:
          new = False
      else:
          new = True
      super(Listing, self).save(force_insert, force_update)
      if new and notification:
          notification.send(User.objects.all().exclude(id=self.owner.id), "listing_new", {'listing': self, })
    Then in my management.py:
      def create_notice_types(app, created_models, verbosity, **kwargs):
          notification.create_notice_type("listing_new", _("New Listing"), _("someone has posted a new listing"), default=2)
    Now, in my notice.html, I want to show users different sentences based on the option they have chosen, so something like this:
      <a href="{{ listing.owner.get_absolute_url }}">{{ listing.owner }}</a>
      {% ifequal listing.nature "For Sale" %}
          created a {{ listing.nature }} listing, <a href="{{ listing.get_absolute_url }}">{{ listing.title }}</a>.
      {% ifequals listing.equal "Give Away" %}
          is {{ listing.nature }}, <a href="{{ listing.get_absolute_url }}">{{ listing.title }}</a>.
      {% ifequal listing.equal "Looking For" %}
          is {{ listing.nature }}, <a href="{{ listing.get_absolute_url }}">{{ listing.title }}</a>
      {% endifequal %}
      {% endifequal %}
      {% endifequal %}
    Could you please help me out with this? Thank you.

    Read the article

  • MySql timeouts - Should I set autoReconnect=true in Spring application?

    - by George
    After periods of inactivity on my website (Using Spring 2.5 and MySql), I get the following error: org.springframework.dao.RecoverableDataAccessException: The last packet sent successfully to the server was 52,847,830 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem. According to this question, and the linked bug, I shouldn't just set autoReconnect=true. Does this mean I have to catch this exception on any queries I do and then retry the transaction? Should that logic be in the data access layer, or the model layer? Is there an easy way to handle this instead of wrapping every single query to catch this?
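
    The usual alternative to autoReconnect is to let the connection pool validate connections before handing them out, so stale ones are discarded silently. A sketch with Commons DBCP in Spring XML (bean name, URL and credentials are placeholders; c3p0 has equivalent settings):

      <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
          <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
          <property name="url" value="jdbc:mysql://localhost:3306/mydb"/>
          <property name="username" value="user"/>
          <property name="password" value="secret"/>
          <!-- Validate each connection on checkout so connections killed by
               wait_timeout never reach the DAO layer. -->
          <property name="testOnBorrow" value="true"/>
          <property name="validationQuery" value="SELECT 1"/>
      </bean>

    With that in place there is no need to wrap individual queries in retry logic for this particular failure mode.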

    Read the article

  • ASP.NET MVC ajax - data transfer

    - by Grienders
    How can I get a result back from the action? I need to show the commentID on the page (aspx) after a successful comment insert.
    Controller:
      [AcceptVerbs(HttpVerbs.Post)]
      public ActionResult ShowArticleByAjax(Guid id, string commentBody)
      {
          Guid commentID = Comment.InsertComment(id, commentBody);
          // How can I transfer commentID to the aspx page???
          return PartialView("CommentDetails", Article.GetArticleByID(id));
      }
    ascx:
      <%using (Ajax.BeginForm("ShowArticleByAjax", new { id = Model.ID }, new AjaxOptions { HttpMethod = "Post", UpdateTargetId = "divCommentDetails", OnSuccess = "successAddComment", OnFailure = "failureAddComment", OnBegin = "beginAddComment" })) { %>
        <p>
          <%=Html.TextArea("commentBody", new { cols = "100%", rows = "10" })%>
        </p>
        <p>
          <input name="submit" type="image" src="../../Content/Images/Design/button_s.gif" id="submit" />
        </p>
      <%} %>
    aspx: doesn't matter
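
    One way to get the new id back to the page is to make it part of the partial view's model, so the rendered fragment already carries it. A sketch of that option, assuming a small view-model class added for this purpose (CommentDetailsViewModel is illustrative, not from the original code):

      [AcceptVerbs(HttpVerbs.Post)]
      public ActionResult ShowArticleByAjax(Guid id, string commentBody)
      {
          Guid commentID = Comment.InsertComment(id, commentBody);

          // Wrap the article and the new comment id together so the
          // partial view (and the page it is injected into) can show both.
          var model = new CommentDetailsViewModel
          {
              Article = Article.GetArticleByID(id),
              NewCommentID = commentID
          };
          return PartialView("CommentDetails", model);
      }

      public class CommentDetailsViewModel
      {
          public Article Article { get; set; }
          public Guid NewCommentID { get; set; }
      }

    In the CommentDetails partial (typed to the view model), <%= Model.NewCommentID %> then renders the id inside the fragment that Ajax.BeginForm injects into divCommentDetails.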

    Read the article

  • Using DataAnnotations with Entity Framework

    - by dcompiled
    I have used the Entity Framework with VS2010 to create a simple person class with properties, firstName, lastName, and email. If I want to attach DataAnnotations like as is done in this blog post I have a small problem because my person class is dynamically generated. I could edit the dynamically generated code directly but any time I have to update my model all my validation code would get wiped out. First instinct was to create a partial class and try to attach annotations but it complains that I'm trying to redefine the property. I'm not sure if you can make property declarations in C# like function declarations in C++. If you could that might be the answer. Here's a snippet of what I tried: namespace PersonWeb.Models { public partial class Person { [RegularExpression(@"(\w|\.)+@(\w|\.)+", ErrorMessage = "Email is invalid")] public string Email { get; set; } /* ERROR: The type 'Person' already contains a definition for 'Email' */ } }
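
    The usual pattern for attaching annotations to generated Entity Framework classes is to keep the partial class free of property declarations and hang the rules off a separate "buddy" metadata class via MetadataTypeAttribute, so nothing collides with the generated definitions. A sketch along those lines (the metadata class name is illustrative):

      using System.ComponentModel.DataAnnotations;

      namespace PersonWeb.Models
      {
          // No properties are redeclared here, so there is no clash with the
          // generated Person class; this just points at the metadata holder.
          [MetadataType(typeof(PersonMetadata))]
          public partial class Person
          {
          }

          public class PersonMetadata
          {
              [RegularExpression(@"(\w|\.)+@(\w|\.)+", ErrorMessage = "Email is invalid")]
              public string Email { get; set; }
          }
      }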

    Read the article

  • Noise Estimation / Noise Measurement in Image

    - by Drazick
    Hello. I want to estimate the noise in an image. Let's assume the model is an image plus white noise, and I want to estimate the noise variance. My method is to calculate the local variance (3x3 up to 21x21 blocks) of the image and then find areas where the local variance is fairly constant (by calculating the local variance of the local-variance matrix). I assume those areas are "flat", hence their variance is almost "pure" noise. Yet I don't get consistent results. Is there a better way? Thanks.
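
    One robust alternative that avoids hunting for flat regions is to high-pass filter the image (e.g. with a Laplacian) and take a robust spread estimate of the result, since edges become sparse outliers there. A sketch in NumPy/SciPy terms, assuming a 2-D grayscale array img (the 0.6745 factor converts the median absolute deviation to a Gaussian sigma):

      import numpy as np
      from scipy.ndimage import convolve

      def estimate_noise_sigma(img):
          """Rough white-noise sigma estimate via MAD of a Laplacian response."""
          img = img.astype(np.float64)
          laplacian = np.array([[0.0,  1.0, 0.0],
                                [1.0, -4.0, 1.0],
                                [0.0,  1.0, 0.0]])
          response = convolve(img, laplacian)
          # Median absolute deviation is insensitive to the few strong edge pixels.
          mad = np.median(np.abs(response - np.median(response)))
          sigma = mad / 0.6745
          # The Laplacian amplifies i.i.d. noise by the root of the sum of squared coefficients.
          return sigma / np.sqrt((laplacian ** 2).sum())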

    Read the article

  • C#: Semantics for generics?

    - by Rosarch
    I have a list:
      private readonly IList<IList<GameObjectController>> removeTargets;
    PickUp inherits from GameObjectController. But when I try this:
      public IList<PickUp> Inventory
      // ...
      gameObjectManager.MoveFromListToWorld(this, user.Model.Inventory);
    I get a compiler error:
      cannot convert from 'System.Collections.Generic.IList<PickUp>' to 'System.Collections.Generic.IList<GameObjectController>'
    Why does this occur? Shouldn't this be fine, since PickUp is a subclass of GameObjectController? Do I need something like Java's Map<E extends GameObjectController>? Earlier, I was having a similar problem, where I was trying to implicitly cast an inventory from an IList to an ICollection. Is this the same problem?
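
    For context, IList<T> is invariant: an IList<PickUp> cannot stand in for an IList<GameObjectController>, because the latter would let callers Add a non-PickUp to the list. A small sketch of the two usual ways around it, assuming the method only needs to read the sequence (names mirror the post; nothing here is the original API):

      // Option 1: accept a covariant read-only sequence (C# 4's IEnumerable<out T>),
      // so an IList<PickUp> is accepted directly.
      public void MoveFromListToWorld(object owner, IEnumerable<GameObjectController> items)
      {
          foreach (GameObjectController item in items)
          {
              // move item into the world...
          }
      }

      // Option 2: keep the IList<GameObjectController> parameter and convert at the
      // call site (requires using System.Linq).
      gameObjectManager.MoveFromListToWorld(
          this,
          user.Model.Inventory.Cast<GameObjectController>().ToList());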

    Read the article

  • Best Practice: DNS and VPN (with private network IPs)

    - by ribx
    I am trying to find the best solution for my DNS problem. We are running several services in our company that you can reach only over VPN. Other services, which are reachable through the internet, got the domain ... At the moment all services inside the VPN network go by .local... These have a VPN IP in the private network 192.168.252.0/24. Clients range from Linux over OS X to Windows. I can think of four possibilities for implementing a DNS infrastructure:
    1. Most common: an internal DNS server that is pushed by the VPN. But this has several drawbacks: your DNS responses are limited to the speed of the VPN connection and of your own DNS server. Because of very complex websites, this can increase page load times quite a lot. Also: we have several VPNs that are not connected to each other, and all of them have their own DNS server.
    2. Several DNS servers locally. These have to be configured by hand, and you have to use some third-party tool like dnsmasq. If you start a DNS request, you ask your locally running DNS server, which decides which server to ask for which domain name. A colleague of mine uses such a solution on OS X (I am sorry, I don't remember the name of the application).
    3. Use your domain hoster. Most of them have APIs available to manipulate your DNS entries, so you could push your private network information to your domain hoster. I am not sure whether they all accept private network IPs, but I guess there will be some problems, in the same way as in number 4.
    4. The one we currently use, because for us it's the most logical choice: we forward the sub-domain *.local.. to our own public DNS server. This works quite well with some public DNS servers like Google's, but most ISPs do not forward the answers, or don't do so consistently. For example, my ISP returns a positive result for a DNS request of a *.local.. domain only every 10th time I run nslookup. (Can someone explain this?)
    Here the real question: is there another solution we have not thought about? Or: which of these methods do you use?

    Read the article

  • False Duplicate Key Error in Entity Framework Application

    - by ProfK
    I have an ASP.NET application using an entity framework model. In an import routine, with the code below, I get a "Cannot insert duplicate key" exception for AccountNum on the SaveChanges call, but when execution stops for the exception, I can query the database for the apparently duplicated field, and no prior record exists. using (var ents = new PvmmsEntities()) { foreach (DataRow row in importedResources.Rows) { var empCode = row["EmployeeCode"].ToString(); try { var resource = ents.ActivationResources.FirstOrDefault(rs => rs.EmployeeCode == empCode); if (resource == null) { resource = new ActivationResources(); resource.EmployeeCode = empCode; ents.AddToActivationResources(resource); } resource.AccountNum = row["AccountNum"].ToString(); ents.SaveChanges(true); } catch(Exception ex) { } } }

    Read the article

  • naive bayesian spam filter question

    - by Microkernel
    Hi guys, I am planning to implement a spam filter using the Naive Bayesian classification model. Online I see a lot of information on Naive Bayesian classification, but the problem is that it's mostly mathematical material rather than a clear statement of how it's done. And I am more of a programmer than a mathematician (yes, I learnt probability and Bayes' theorem back in school, but have been out of touch for a long, long time, and I don't have the luxury of learning it now; I have nearly 3 weeks to come up with a working prototype). So if someone could explain it, or point me to a place where it's explained for programmers rather than mathematicians, it would be a great help. PS: By the way, I have to implement it in C, if you want to know. :( Regards, Microkernel
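
    At its core the classifier is just counting and adding logarithms: train by counting how often each token appears in spam and in ham, then score a message by summing log-probabilities and comparing the two totals. A compressed sketch in C of the scoring step, assuming the counts have already been collected (all names and the fixed-size table are illustrative):

      #include <math.h>

      #define VOCAB_SIZE 10000   /* illustrative fixed vocabulary */

      /* Per-token counts gathered during training. */
      static double spam_count[VOCAB_SIZE];
      static double ham_count[VOCAB_SIZE];
      static double total_spam_tokens, total_ham_tokens;
      static double n_spam_msgs, n_ham_msgs;

      /* P(token|class) with Laplace (add-one) smoothing so unseen tokens
         don't zero out the whole product. */
      static double token_prob(double count, double total)
      {
          return (count + 1.0) / (total + VOCAB_SIZE);
      }

      /* Returns 1 if the message (given as token ids) looks like spam. */
      int is_spam(const int *tokens, int n)
      {
          double log_spam = log(n_spam_msgs / (n_spam_msgs + n_ham_msgs));
          double log_ham  = log(n_ham_msgs  / (n_spam_msgs + n_ham_msgs));
          for (int i = 0; i < n; i++) {
              log_spam += log(token_prob(spam_count[tokens[i]], total_spam_tokens));
              log_ham  += log(token_prob(ham_count[tokens[i]],  total_ham_tokens));
          }
          return log_spam > log_ham;
      }

    Working in log space avoids the underflow you would get from multiplying thousands of small probabilities directly.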

    Read the article

  • Creating a form for editing embedded documents with MongoMapper

    - by Luke Francl
    I'm playing around with MongoMapper but I'm having trouble figuring out how to create a form for an object that has embedded documents. With ActiveRecord, I'd use fields_for but when asked if this would be supported a few months ago, MongoMapper author John Nunemaker wrote: "Nope and nope. It is really [not] that hard with attr_accessor's." OK, fair enough, but how do you write the form for this to work? I'm not interested in using the nested form implementations that are out there because I want to do this the "normal" way as I'm learning about MongoMapper. My model is simple enough - I've got a Person with embedded documents for email addresses, phone numbers, etc. I do not care about updating existing embedded documents. They can be re-created from the form input each time a Person is edited.

    Read the article

  • Determining the health of a Cisco switch port?

    - by ewwhite
    I've been chasing a packet-loss and network stability issue for a handful of end-users on an internal network for the past few days... These issues surfaced recently, however, the location was struck by lightning six weeks ago. I was seeing 5-10% packet loss between a stack of four Cisco 2960's and several PC's and phones on the other side of a 77-meter run. The PC's were run inline with the phones over a trunked link. We were seeing dropped calls and interruptions in client-server applications and Microsoft Exchange connectivity. I tried the usual troubleshooting steps remotely, having a local technician do the following during breaks in user and production activity: change cables between the wall jack and device. change patch cables between the patch panel and switch port(s). try different switch ports within the 2960 stack. change end-user devices with known-good equipment (new phones, different PC's). clear switch port interface counters and monitor incrementing errors closely. (Pastebin output of sh int) Pored over the device logs and Observium RRD graphs. No link up/down issues from the switch side. change power strips on the end-user side. test cable runs from the Cisco 2960 using test cable-diagnostics tdr int Gi4/0/9 (clean)* test cable runs with a Tripp-Lite cable tester. (clean) run diagnostics on the switch stack members. (clean) In the end, it took three changes of switch ports to find a stable solution. The only logical conclusion is that a few Cisco 2960 switch ports are bad or flaky... Not dead, but not consistent in behavior either. I'm not used to seeing individual ports die in this manner. What else can I test or check to determine if these devices are bad? Is it common for single ports to have problems, rather than a contiguous bank of ports? BTW - show cable-diagnostics tdr int Gi4/0/14 is very cool... Interface Speed Local pair Pair length Remote pair Pair status --------- ----- ---------- ------------------ ----------- -------------------- Gi4/0/14 1000M Pair A 79 +/- 0 meters Pair B Normal Pair B 75 +/- 0 meters Pair A Normal Pair C 77 +/- 0 meters Pair D Normal Pair D 79 +/- 0 meters Pair C Normal

    Read the article

  • Django app using Amazon AWS S3 storage instead of a DB?

    - by farble1670
    New to Python here, so bear with me ... I'm looking at Django for a rapid prototype of a photo-sharing app with an Amazon AWS S3 storage back end. However, as far as I can tell, Django is tailored toward the typical database MVC type of pattern. Is there a way to, for example, provide a custom Django model implementation that talks to S3 instead of a DB? A custom DB engine? Would either of these be practical, or am I looking in the wrong direction? Thanks.
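
    A common split is to keep the model metadata (title, owner, timestamps) in a regular database and let Django's file-storage layer put the image bytes on S3, for example via the django-storages package, rather than writing a custom DB engine. A sketch under that assumption (bucket name and keys are placeholders):

      # settings.py (assuming the django-storages package is installed)
      DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
      AWS_ACCESS_KEY_ID = 'YOUR_KEY'
      AWS_SECRET_ACCESS_KEY = 'YOUR_SECRET'
      AWS_STORAGE_BUCKET_NAME = 'my-photo-bucket'

      # models.py -- the ImageField's contents end up on S3, the row stays in the DB.
      from django.db import models

      class Photo(models.Model):
          title = models.CharField(max_length=100)
          image = models.ImageField(upload_to='photos/')
          uploaded_at = models.DateTimeField(auto_now_add=True)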

    Read the article

  • Rails migration to add boolean column to Postgres on Heroku

    - by pmc255
    I'm trying to execute a simple Rails migration to add a boolean column to an existing table. Here's the add_column call: add_column :users, :soliciting, :boolean, :null => false, :default => false However, after the migration runs (successfully, with no errors), I don't see the new column. If I go into the console and list the columns on the User table, for example, with this command: >> User.columns.each { |c| puts "#{c.name} : #{c.type}" } All the other columns show up, but not the one I just added with the migration. What's even more strange is that looking up a random user object yields the Postgres version of booleans (Ruby strings) >> User.find(1).soliciting => "t" However, the existing boolean columns all show up with standard Ruby boolean values of true and false. What's going on here? Is the migration actually complete? Why doesn't the column show up, yet is accessible in the model objects?
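
    If the migration really did run, the symptoms are consistent with ActiveRecord's cached column information being stale in that console or process. A quick check along those lines (nothing here is Heroku-specific; reset_column_information is standard ActiveRecord):

      # In the console (or in code run right after the migration):
      User.reset_column_information          # drop the cached column list
      User.columns.map { |c| [c.name, c.type] }
      # => should now include ["soliciting", :boolean]

      # With the cache refreshed, the value comes back typecast:
      User.find(1).soliciting   # => true or false, rather than "t"

    Restarting the console (or the app processes on Heroku) has the same effect, since a fresh process reloads the schema.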

    Read the article

  • How to: required validator based on user role in ASP.NET MVC 3

    - by user70909
    Hi, I have a form with a field "Real Cost", and I want to customize its appearance, and whether it should be validated, based on the user's role. To be more clear: say the client wants to show this field on the form or details page, and also make it editable, for users in the roles "Senior Sales" and "Manager" but not for other roles. Can anyone please guide me to the best way to do this? Should I write a custom required validator based on the user's role, and if so, can you please provide the right implementation of it? Some may tell me to create a custom model for this, but I think it would be a hassle, plus the roles will be dynamic, so there is no predefined set of roles. I hope I was clear enough.
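
    One way to express "required only for certain roles" is a custom ValidationAttribute that consults the current principal; since the roles are dynamic, they could be loaded from configuration instead of the attribute arguments shown here. A sketch only (the attribute name and role source are illustrative):

      using System.ComponentModel.DataAnnotations;
      using System.Linq;
      using System.Web;

      // Required only when the current user belongs to one of the given roles.
      public class RequiredForRolesAttribute : ValidationAttribute
      {
          private readonly string[] roles;

          public RequiredForRolesAttribute(params string[] roles)
          {
              this.roles = roles;
              ErrorMessage = "This field is required for your role.";
          }

          public override bool IsValid(object value)
          {
              var user = HttpContext.Current != null ? HttpContext.Current.User : null;
              bool mustValidate = user != null && roles.Any(r => user.IsInRole(r));
              if (!mustValidate)
                  return true;    // other roles: the field is optional
              return value != null && value.ToString().Trim().Length > 0;
          }
      }

      // Usage on the view model (property name is illustrative):
      // [RequiredForRoles("Senior Sales", "Manager")]
      // public decimal? RealCost { get; set; }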

    Read the article
