Search Results

  • Trouble with IIS SMTP relaying to Gmail

    - by saille
    I appreciate that similar questions have been asked about how to set up SMTP relaying with IIS's virtual SMTP server, but I'm still completely stumped by this problem. Here's the setup: an IIS 6.0 SMTP server running on a Win2k3 box with a NAT'ed IP. The company uses Gmail for all email services. An app on the box needs to send email; normally we'd just point the app at smtp.gmail.com directly, but this app doesn't support TLS. Easy, we just set up a local SMTP relay, right? So I thought.

    What we have done so far: set up the IIS SMTP server to relay to smtp.gmail.com, per these excellent instructions: http://fmuntean.wordpress.com/2008/10/26/how-to-configure-iis-smtp-server-to-forward-emails-using-a-gmail-account/. The local SMTP relay allows anonymous access, and both the local IP and the loopback IP have been explicitly allowed in the Connection... and Relay... dialogs. We tried sending email from 2 different apps via the local SMTP server, but both failed: the emails end up in the Queue folder but never get sent. The IIS logs show the conversation with the local app, but zero conversation happening with smtp.gmail.com. The port Gmail uses is open outbound, and the apps we have that do support TLS can send email directly via smtp.gmail.com, so there is no problem with the network.

    At this point I changed the SMTP settings in the IIS SMTP server to use a different external SMTP server and, hey presto, the local apps can send email via the local IIS SMTP relay. So smtp.gmail.com fails to work with our IIS SMTP relay while another 3rd-party SMTP service works fine. We need to use smtp.gmail.com, so how do we troubleshoot this one?
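
    One way to isolate the failure is to drive the same SMTP conversation by hand from the box. A minimal sketch, assuming Python is available; the account and password below are placeholders, not values from the question:

      import smtplib

      # Hypothetical Gmail account; substitute the real relay credentials.
      server = smtplib.SMTP("smtp.gmail.com", 587, timeout=10)
      server.set_debuglevel(1)   # prints the full SMTP conversation to stderr
      server.ehlo()
      server.starttls()          # Gmail requires STARTTLS on port 587
      server.ehlo()
      server.login("user@example.com", "app-password")
      server.sendmail("user@example.com", ["user@example.com"],
                      "Subject: relay test\r\n\r\nIf this arrives, the TLS path works.")
      server.quit()

    If this succeeds from the same machine while the IIS relay still queues silently, that points at the relay's outbound TLS handling rather than the network.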

  • Can't access IIS 7 server URL from the same IIS 7 server.

    - by Kevin Raffay
    We have an intranet site, xxx.yyy.com, that users reach by entering http://xxx.yyy.com. Our problems started when we migrated to IIS 7 running on a new 2003 server. We got rid of our single-sign-on code and implemented a security model where we capture a user's domain credentials, which we then authenticate against a DB. In order to get the domain credentials passed to our ASP.NET app, we have the following settings:

      Anonymous Authentication: Disabled
      ASP.NET Impersonation: Enabled
      Basic/Digest/Forms Authentication: Disabled
      Windows Authentication: Enabled

    We allow "*" and deny "?" in the web.config. Browsing http://xxx.yyy.com from any client PC results in a domain login prompt, and if you enter a proper user/password, you can get in. However, browsing http://xxx.yyy.com while remoted into the server results in 3 domain login prompts and eventually a 401 Unauthorized error. We have traced this behavior to pages in our web site that do "screen scraping" using an HttpRequest against a URL on the same server. When doing the same HttpRequest from any other client, using a test harness that passes authorized credentials, all is good. So HttpRequest calls from the server to itself fail, just like attempts to browse that server's URL from within a remote session. Why would a request to http://xxx.yyy.com made on server xxx.yyy.com itself fail authentication?
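
    A rough equivalent of that test harness, sketched in Python (assuming the third-party requests and requests_ntlm packages; the domain, user, and path are placeholders), makes it easy to compare behavior from a remote client with behavior on the server itself:

      import requests
      from requests_ntlm import HttpNtlmAuth  # pip install requests_ntlm

      # Hypothetical credentials and URL; substitute real values.
      resp = requests.get(
          "http://xxx.yyy.com/some/page",
          auth=HttpNtlmAuth("MYDOMAIN\\someuser", "password"),
      )
      print(resp.status_code)  # expect 200 remotely; the question reports 401 on the server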

  • How do you guys handle a custom yum repository?

    - by luckytaxi
    I have a bunch of tools (nagios, munin, puppet, etc.) that get installed on all my servers, and I'm in the process of building a local yum repository. I know most folks just dump all the rpms into a single folder (broken down into the correct path) and then run createrepo inside the directory. However, what happens when you have to update the rpms? I ask because I was going to give each piece of software its own folder.

    Example one, all packages inside one folder (custom_software):

      /admin/software/custom_software/5.4/i386
      /admin/software/custom_software/5.4/x86_64
      /admin/software/custom_software/4.6/i386
      /admin/software/custom_software/4.6/x86_64

    What I'm thinking of:

      /admin/software/custom_software/nagios/5.4/i386
      /admin/software/custom_software/nagios/5.4/x86_64
      /admin/software/custom_software/nagios/4.6/i386
      /admin/software/custom_software/nagios/4.6/x86_64
      /admin/software/custom_software/puppet/5.4/i386
      /admin/software/custom_software/puppet/5.4/x86_64
      /admin/software/custom_software/puppet/4.6/i386
      /admin/software/custom_software/puppet/4.6/x86_64

    This way, if I had to update to the latest version of puppet, I could manage the files accordingly. I wouldn't know which rpms belong to which software if I threw them all into one big folder. Does that make sense?
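
    To make the per-package layout concrete, metadata can be (re)generated for every package/release/arch combination in a loop. A sketch assuming Python and that createrepo is on the PATH; the package names and releases are the ones from the example above:

      import subprocess
      from pathlib import Path

      BASE = Path("/admin/software/custom_software")
      PACKAGES = ["nagios", "puppet"]   # from the example layout
      RELEASES = ["5.4", "4.6"]
      ARCHES = ["i386", "x86_64"]

      for pkg in PACKAGES:
          for rel in RELEASES:
              for arch in ARCHES:
                  repo_dir = BASE / pkg / rel / arch
                  repo_dir.mkdir(parents=True, exist_ok=True)
                  # Regenerate metadata for just this repo after adding/updating rpms.
                  subprocess.run(["createrepo", str(repo_dir)], check=True)

    Because each package gets its own repo, updating puppet only means re-running createrepo on the puppet directories.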

  • Lots of questions about file I/O (reading/writing message strings)

    - by Nazgulled
    For a university project I'm doing (which I've made a couple of posts about in the past), some sort of social network, users need the ability to exchange messages. At first I designed my data structures to hold ALL messages in a linked list, limiting the message size to 256 chars. However, I think my instructors would prefer that I save the messages on disk and read them only when I need them. Of course, they won't say what they prefer; I need to make a choice and justify it as best I can. One thing to keep in mind: I only need to save the latest 20 messages from each user, no more.

    Right now I have a hash table that acts as an inbox, stored inside the user profile and indexed by name (the user that sent the message). The value for each element is a data structure holding an array of 20 size_t elements (the 20 messages mentioned above). The idea is to keep track of the disk file offsets and bytes written; then, when I need to read a message, I just fseek() to the offset and read the necessary bytes. I think this could work nicely.

    I could use one single file to hold all messages from all users in the network. I say a single file because a colleague asked an instructor about saving each user's messages independently, and he replied that it might not be the best approach because the file system has its limits. That's why I'm thinking of going the single-file route.

    However, this presents a problem. Since I only need to keep the latest 20 messages, I need to discard the older ones when I reach that limit, and I have no idea how to do this. All I know is fread() and fwrite() to read/write bytes from/to files. How can I go to a file offset and say "hey, delete the following X bytes"? Even if I could do that, there's another problem: every offset after that one would shift, and I would have to process all users' mailboxes to fix them, which would be a pain. So, any suggestions to solve my problems? What do you suggest?
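
    For what it's worth, one common way to sidestep mid-file deletion is to give every message a fixed-size slot and recycle the oldest slot in circular fashion, so nothing ever has to be removed and no offset ever shifts. A minimal sketch of the idea, in Python rather than C, using the sizes from the question:

      SLOT_SIZE = 256        # fixed message size, as in the question
      SLOTS_PER_USER = 20    # keep only the latest 20 messages

      def write_message(f, user_index, msg_count, text):
          # msg_count is the total messages this user has ever received;
          # msg_count % SLOTS_PER_USER picks the slot to overwrite.
          slot = msg_count % SLOTS_PER_USER
          offset = (user_index * SLOTS_PER_USER + slot) * SLOT_SIZE
          f.seek(offset)
          f.write(text.encode("utf-8")[:SLOT_SIZE].ljust(SLOT_SIZE, b"\0"))

      def read_message(f, user_index, slot):
          offset = (user_index * SLOTS_PER_USER + slot) * SLOT_SIZE
          f.seek(offset)
          return f.read(SLOT_SIZE).rstrip(b"\0").decode("utf-8")

    The same offset arithmetic maps directly onto fseek()/fwrite() in C.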

  • "Modern" Ethernet over coax

    - by Electrons_Ahoy
    So, I've just bought a house. It's reasonably new, built in the early '00s. One of the features that got built in was a cable TV drop in every room. The cabling is gorgeous; there's even a wiring cabinet of sorts in a closet where the cables all tie together at the splitter to the outside line. Of course, my problem is that I only own the one TV. I do, however, own a few computers. What I would love to do is drop a switch in the wiring closet and run 100/1000BASE-T Ethernet over the coax in the walls I wouldn't otherwise be using. My fantasy is some kind of adapter-plug-thing that takes a coax plug on one side and a cat5/RJ45 plug on the other. Has anyone else done this? Any suggestions? (There are a few other options that suggest themselves. First, I could just use the existing cabling channels and re-run cat5 or 6 through the walls; while tempting, that sounds like more work than I really want to put in, so I'm calling that Plan B. Also, I could scare up a mess of old 10BASE2 cards and run the house on thinnet, all mid-90s style. While I think I'd get major style points for that, I don't think I can get a 10BASE2 adapter for the new laptop, and besides, I have all these super-snazzy gigabit adapters I'd like to be using. And so forth.)

  • Windows Mobile Compact Framework SqlCeConnection

    - by pdiddy
    I'm hearing that it's better to open one connection at app startup and close it when the app shuts down. What kind of issues can occur when using multiple connections? Are there any articles out there saying that a single connection is best practice? What is your experience with SQL CE?
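
    For reference, the pattern being described is a single lazily-opened connection shared for the app's lifetime. A minimal sketch in Python, with sqlite3 standing in for SQL CE (the real API differs; this only illustrates the lifetime management):

      import sqlite3

      _conn = None  # one connection, shared for the app's lifetime

      def get_connection():
          # Open on first use, then reuse everywhere.
          global _conn
          if _conn is None:
              _conn = sqlite3.connect("app.db")
          return _conn

      def close_connection():
          # Call once at app shutdown.
          global _conn
          if _conn is not None:
              _conn.close()
              _conn = None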

  • RAID1 Broken Mirroring

    - by Sanoj
    I have a small server running Windows Small Business Server 2003, using RAID1 across two hard drives via a HighPoint RocketRAID 1640 RAID card. This week the server raised an alarm, and during reboot I got the error message Broken Mirroring (User Manual, page 30). I had a few alternatives (see the manual). First I tried Continue, but the server restarted during boot. Next time I chose Power Off, replaced the older hard drive with a new one, and on boot selected Rebuild, choosing the new hard drive as the target. The rebuild procedure started and a progress bar appeared at 0%, but after a few seconds I got the message Copy Failed!, and then the server booted and Windows Server started. It works fine now, but I guess I'm running on just one hard drive and it's not mirrored. I haven't touched the server since then (two days ago). What should I do now? I have no experience with this situation. Does anyone have some guidance?

  • Does Objective-C support mixins like Ruby?

    - by hacksignal
    In Ruby there are modules, and you can extend a class by mixing in a module:

      module MyModule
        def printone
          print "one"
        end
      end

      class MyClass
        include MyModule
      end

      theOne = MyClass.new
      theOne.printone
      >> one

    In Objective-C, I find that I have a set of common methods that I want a number of classes to "inherit". What ways can I achieve this other than creating a common base class and deriving them all from it?

  • How to update a user created Bitmap in the Windows API

    - by gamernb
    In my code I quickly generate images on the fly, and I want to display them as quickly as possible. So the first time I create my image, I create a new BITMAP, but instead of deleting the old one and creating a new one for every subsequent image, I just want to copy my data back into the existing one. Here is my code to do both the initial creation and the updating. The creation works just fine, but the updating doesn't.

      BITMAPINFO bi;

      HBITMAP Frame::CreateBitmap(HWND hwnd, int tol1, int tol2, bool useWhite, bool useBackground)
      {
          ZeroMemory(&bi.bmiHeader, sizeof(BITMAPINFOHEADER));
          bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
          bi.bmiHeader.biWidth = width;
          bi.bmiHeader.biHeight = height;
          bi.bmiHeader.biPlanes = 1;
          bi.bmiHeader.biBitCount = 24;
          bi.bmiHeader.biCompression = BI_RGB;
          ZeroMemory(bi.bmiColors, sizeof(RGBQUAD));

          // Allocate memory for bitmap bits
          int size = height * width;
          Pixel* newPixels = new Pixel[size];

          // Recompute the output
          //memcpy(newPixels, pixels, size*3);
          ComputeOutput(newPixels, tol1, tol2, useWhite, useBackground);

          HBITMAP bitmap = CreateDIBitmap(GetDC(hwnd), &bi.bmiHeader, CBM_INIT,
                                          newPixels, &bi, DIB_RGB_COLORS);
          delete[] newPixels;
          return bitmap;
      }

    and

      void Frame::UpdateBitmap(HWND hwnd, HBITMAP bitmap, int tol1, int tol2, bool useWhite, bool useBackground)
      {
          ZeroMemory(&bi.bmiHeader, sizeof(BITMAPINFOHEADER));
          bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);

          HDC hdc = GetDC(hwnd);
          if(!GetDIBits(hdc, bitmap, 0, bi.bmiHeader.biHeight, NULL, &bi, DIB_RGB_COLORS))
              MessageBox(NULL, "Can't get base image info!", "Error!", MB_ICONEXCLAMATION | MB_OK);

          // Allocate memory for bitmap bits
          int size = height * width;
          Pixel* newPixels = new Pixel[size];

          // Recompute the output
          //memcpy(newPixels, pixels, size*3);
          ComputeOutput(newPixels, tol1, tol2, useWhite, useBackground);

          // Push back to windows
          if(!SetDIBits(hdc, bitmap, 0, bi.bmiHeader.biHeight, newPixels, &bi, DIB_RGB_COLORS))
              MessageBox(NULL, "Can't set pixel data!", "Error!", MB_ICONEXCLAMATION | MB_OK);

          delete[] newPixels;
      }

    where the Pixel struct is just this:

      struct Pixel
      {
          unsigned char b, g, r;
      };

    Why does my update function not work? I always get the message box "Can't set pixel data!". I used code similar to this when I was loading the original bitmap from a file and then editing the data, but now that I create the bitmap manually, it doesn't work.

  • Prior jQuery UI Dialogs become nonresponsive after opening a new Dialog

    - by Euwyn
    I'm having issues with multiple jQuery dialogs. The first one opens fine and is resizable, draggable, etc. However, when I open a second, the first becomes unresponsive to dragging, moving, and closing, even after the second one is closed. What causes this, and how can it be fixed? According to the jQuery documentation this should work fine, since stacking is supported.

  • Traceroute Theory

    - by Hamza Yerlikaya
    I am toying with traceroute. My application sends an ICMP echo request with a TTL of 0, and every time I receive a Time Exceeded message I increment the TTL by one and resend the packet. But here is what happens: I have 2 routers on my network, and I can trace the route through those routers, but the third hop always ends up being one of the OpenDNS servers, the same IP every time, no matter where I traceroute to. As far as I know this is the correct traceroute implementation; can anyone tell me what I am doing wrong?
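
    For comparison, the textbook loop looks like the sketch below, assuming Python with the third-party scapy package (it needs root/admin to send raw packets). Note that traceroute implementations conventionally start at TTL 1:

      from scapy.all import IP, ICMP, sr1  # pip install scapy

      def traceroute(dest, max_hops=30):
          for ttl in range(1, max_hops + 1):
              reply = sr1(IP(dst=dest, ttl=ttl) / ICMP(), timeout=2, verbose=0)
              if reply is None:
                  print(ttl, "*")           # no reply within the timeout
              elif reply.type == 11:        # ICMP Time Exceeded: an intermediate hop
                  print(ttl, reply.src)
              else:                         # echo reply: reached the destination
                  print(ttl, reply.src)
                  break

      traceroute("example.com")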

  • Using MongoDB with Ruby On Rails and the Mongomapper plugin

    - by Micke
    Hello, I am currently learning Ruby on Rails, as I am a long-time PHP developer, and I am building my own community-like site. I have come pretty far and have made the user models and such using MySQL. But then I heard of MongoDB, looked into it a bit more, and found it rather nice. So I have set it up, using MongoMapper for the connection between Rails and MongoDB, and I am now using it for the News page on the site. I also have a profile page for every user, which includes their own guestbook so other users can come to their profile and write a little message to them. My thought now is to change the user models from MySQL to MongoDB. Let me start by showing how the models are set up.

    The User model:

      class User < ActiveRecord::Base
        has_one :guestbook, :class_name => "User::Guestbook"
      end

    The Guestbook model:

      class User::Guestbook < ActiveRecord::Base
        belongs_to :user
        has_many :posts, :class_name => "User::Guestbook::Posts", :foreign_key => "user_id"
      end

    And then the Guestbook posts model:

      class User::Guestbook::Posts < ActiveRecord::Base
        belongs_to :guestbook, :class_name => "User::Guestbook"
      end

    I have divided it like this for my own convenience, but now that I am going to try to migrate to MongoDB, I don't know how to lay out the data. I would like to have one record per user with a field holding all that user's guestbook entries, since MongoDB can embed documents (EmbeddedDocument). That way I would have just one collection for the users instead of the three tables I need now just to have a guestbook. So my thought is to have it like this:

    The User model:

      class User
        include MongoMapper::Document
        one :guestbook, :class_name => "User::Guestbook"
      end

    The Guestbook model:

      class User::Guestbook
        include MongoMapper::EmbeddedDocument
        belongs_to :user
        many :posts, :class_name => "User::Guestbook::Posts", :foreign_key => "user_id"
      end

    And then the Guestbook posts model:

      class User::Guestbook::Posts
        include MongoMapper::EmbeddedDocument
        belongs_to :guestbook, :class_name => "User::Guestbook"
      end

    But I can think of one problem: when I just want to fetch user information like a nickname and a birthdate, it will have to fetch all the user's guestbook posts as well. And if each user has, say, a thousand posts in the guestbook, that is a lot for the system to fetch. Or am I wrong? Do you think I should do it another way? Thanks in advance, and sorry if I am hard to understand, but I am not so educated in the English language. :)

  • Always use multiple cores (/MP) flag with Visual Studio?

    - by dwj
    I'm using Visual Studio 2008 on my main build system, and I've been playing with Visual Studio 2010 on another one. It appears that the tool still only wants to use one core when compiling unless you specify the /MP switch in the compiler options (see http://stackoverflow.com/questions/1422601/how-do-i-turn-on-multi-cpu-core-c-compiles-in-the-visual-studio-ide-2008), and I have to do this for every project. Is there a way to make VS always do this?

  • singleton vs factory?

    - by fayer
    I've got 3 Log classes that all implement an ILog interface: DatabaseLog, FileLog, and ScreenLog. There can only be one instance of each of them. Initially I thought of using the singleton pattern for each class, but then I thought: why not use a factory for instantiation instead? That way I won't have to implement the singleton machinery in each of them, or in any future Log classes, and maybe someone will want them as multiple objects in the future. So my question is: should I use the factory or the singleton pattern here?
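
    A minimal sketch of the factory approach (in Python, with class names taken from the question) keeps the single-instance policy in one place instead of repeating it in every Log class:

      class ILog:
          def write(self, msg):
              raise NotImplementedError

      class DatabaseLog(ILog):
          def write(self, msg):
              print("db:", msg)

      class FileLog(ILog):
          def write(self, msg):
              print("file:", msg)

      class ScreenLog(ILog):
          def write(self, msg):
              print("screen:", msg)

      class LogFactory:
          _types = {"database": DatabaseLog, "file": FileLog, "screen": ScreenLog}
          _instances = {}  # the single-instance policy lives here, not in each class

          @classmethod
          def get(cls, kind):
              if kind not in cls._instances:
                  cls._instances[kind] = cls._types[kind]()
              return cls._instances[kind]

      assert LogFactory.get("file") is LogFactory.get("file")

    If multiple instances are ever wanted, only the factory has to change.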

  • parse json with ant

    - by paleozogt
    I have an ant build script that needs to pull files down from a web server. I can use the "get" task to pull the files down one by one. However, I'd like to get a list of the files first and then iterate over the list with "get" to download them. The web server reports the list of files in JSON format, but I'm not sure how to parse JSON with ant. Are there any ant plugins that allow for JSON parsing?

  • Using GitOAuthPlugin for Jenkins - not working as expected

    - by Blundell
    I need some clarity, and maybe a fix. I'm using this plugin to authorise who can view our Jenkins CI server: https://wiki.jenkins-ci.org/display/JENKINS/Github+OAuth+Plugin

    As I understand it, anyone who is authorised on one of our GitHub projects can also log in to our Jenkins box. This works. I thought it would also restrict each person logging in to only the projects they have GitHub permission on. For instance: three projects on GitHub (A, B, C) and three builds on Jenkins. User 1 has Git access to all 3 projects (A, B, C); User 2 has Git access to only 1 project (A). When logging into Jenkins, User 1 can see all 3 projects (this works) and User 2 should only see project A. The problem is that User 2 can also see all 3 projects, when they should only see 1!

    Have I got this right, and if so, is this a bug? I have the settings set in Jenkins' configuration under Github Authorization Settings. There we have some admin users, one organization, and none of the 4 checkboxes ticked. (User 2 is not an admin and is not part of the org.) The plugin is open sourced here: https://github.com/mocleiri/github-oauth-plugin

    I was trying to get Jenkins to print me the logs from the plugin, but I also failed at viewing those (to see if there was an issue). I followed these instructions: https://wiki.jenkins-ci.org/display/JENKINS/Logging

    It's the same concept as outlined below, but using GitHub rather than manually selecting users: https://wiki.jenkins-ci.org/display/JENKINS/2012/01/03/Allow+access+to+specific+projects+for+Users%28Assigning+security+for+projects+in+Jenkins%29

    Have I got this right or wrong? Is it possible to auth a Jenkins user to only see one project?

  • PHP - format bytes to kilobytes, megabytes, gigabytes

    - by pygorex1
    Scenario: the sizes of various files are stored in a database as bytes. What's the best way to format this size info as kilobytes, megabytes, and gigabytes? For instance, I have an MP3 that Ubuntu displays as "5.2 MB (5445632 bytes)". How would I display this on a web page as "5.2 MB", while having files less than one megabyte display as KB and files one gigabyte and above display as GB?
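
    The underlying arithmetic is just repeated division by 1024 with a ladder of unit names. A quick sketch of the logic, shown in Python here even though the question is about PHP:

      def format_size(num_bytes):
          # Render a byte count as bytes/KB/MB/GB, e.g. 5445632 -> "5.2 MB".
          for unit in ("bytes", "KB", "MB", "GB", "TB"):
              if num_bytes < 1024 or unit == "TB":
                  if unit == "bytes":
                      return "%d bytes" % num_bytes
                  return "%.1f %s" % (num_bytes, unit)
              num_bytes /= 1024.0

      print(format_size(5445632))  # 5.2 MB

    The PHP version is a direct transliteration of the same loop using sprintf() for the formatting.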

  • IPCop server slows down download speed

    - by noocyte
    I have an IPCop server running at home that has been doing just fine for ~5 months, but last week I suddenly started getting time-outs and slow downloads from the 'net. I first thought my ISP was acting up, then that it might be one of my 3 switches or some of my cabling. In due order I've tested everything above and found it all working as it should. The only factor remaining is the IPCop server itself.

    Facts: I've got a 15/15 Mbit fiber line. With the IPCop box as router (ISP router set to bridge mode) I get ~15 Mbit upload but only 0.5 Mbit download. If I connect without the IPCop box (using the ISP router) I get ~12 Mbit upload and ~15 Mbit download. The load on the IPCop box appears to be light, and it handled this traffic just fine 2 weeks ago. Memory usage is ~60%; I restarted it and tested again, and memory fell to ~50% (after 5 months of uptime).

    I'm thinking one of the NICs is busted, but I'm sort of perplexed that this would be the outcome: slow download but full-speed upload. Has anybody seen that happen before? Could it just be one of the NICs needing to be replaced? I'll try that as soon as I can get my hands on a couple of new ones.

  • Why do weekly tasks created via PowerShell using a different user fail with error 0x41306

    - by Danny Tuppeny
    We have some scripts that create scheduled jobs using PowerShell as part of our application. When testing them recently, I noticed that some of them always failed immediately and no output was ever produced (they don't even appear in the Get-Job list). After many days of tweaking, we've managed to isolate it to any job that is set to run weekly.

    Below is a script that creates two jobs that do exactly the same thing. When we run this on our domain and provide credentials of a domain user, then force both jobs to run in the Task Scheduler GUI (right-click, Run), the daily one runs fine (0x0 result) and the weekly one fails (0x41306).

    Note: if I don't provide the -Credential param, both jobs work fine. The jobs only fail if the task is both weekly and running as this domain user.

    I can't find information on why this is happening, nor think of any reason it would behave differently for weekly jobs. The "History" tab in the Task Scheduler has almost no useful information, just "Task stopping due to user request" and "Task terminated", both of which have no useful detail:

      Task Scheduler terminated "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" instance of the "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" task.
      Task Scheduler stopped instance "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" of task "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" as request by user "MyDomain\SomeUser".

    What's up with this? Why do weekly tasks run differently, and how can I diagnose this issue? This is PowerShell v3 on Windows Server 2008 R2. I've been unable to reproduce this locally, but I don't have a user set up in the same way as the one in our production domain (I'm working on this, but I wanted to post this ASAP in the hope someone knows what's happening!).

      Import-Module PSScheduledJob
      $Action = { "Executing job!" }
      $cred = Get-Credential "MyDomain\SomeUser"

      # Remove previous versions (to allow re-running this script)
      Get-ScheduledJob Test1 | Unregister-ScheduledJob
      Get-ScheduledJob Test2 | Unregister-ScheduledJob

      # Create two identical jobs, with different triggers
      Register-ScheduledJob "Test1" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Weekly -At 1:25am -DaysOfWeek Sunday)
      Register-ScheduledJob "Test2" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Daily -At 1:25am)

  • Post parameters are becoming null (at random)

    - by Raghuram Duraisamy
    My web application is developed with Struts2 and was working fine until recently, when one of the modules suddenly started malfunctioning: the 'Update Student details' page. This page has a lot of fields like 'schoolName', 'degreeName', etc.:

      School 1: <input name="schoolName">
      School 2: <input name="schoolName">
      .....
      School n: <input name="schoolName">

    As mentioned, the page was working perfectly until recently. Now, one or more of the values of 'schoolName', 'degreeName', etc. are being received as "" (EMPTY STRING) on the server side. For debugging I used Firebug and remote debugging in Eclipse, and I find that the post parameters are correct on the client side. For instance, during one of the submissions the post parameters were as below (I noted them from Firebug):

      Content-Type: multipart/form-data; boundary=---------------------------2921238217421
      Content-Length: 48893

      <OTHER_PARAMETERS> <!--Truncated for clarity -->
      -----------------------------2921238217421
      Content-Disposition: form-data; name="schoolName"

      ABC Institute
      -----------------------------2921238217421
      Content-Disposition: form-data; name="schoolName"

      Test School
      -----------------------------2921238217421
      Content-Disposition: form-data; name="schoolName"

      XYZ
      -----------------------------2921238217421
      Content-Disposition: form-data; name="schoolName"

      Texas Institute
      -----------------------------2921238217421
      Content-Disposition: form-data; name="schoolName"

      XXXX School
      -----------------------------2921238217421--

    But on the server side the request params were schoolName=[ABC Institute, Test School, XYZ, , XXXX School]: "Texas Institute" was received as "" (EMPTY STRING) in this particular case. This is not happening consistently, and the parameters that become NULL (or EMPTY STRING) seem random to me. During one submission schoolName[3] became null as illustrated above, during another it was schoolName[2], and at times none of the parameters are nullified.

    The following is the list of interceptors in the action definition:

      FileUploadInterceptor          org.apache.struts2.interceptor.FileUploadInterceptor
      ServletConfigInterceptor       org.apache.struts2.interceptor.ServletConfigInterceptor
      StaticParametersInterceptor    com.opensymphony.xwork2.interceptor.StaticParametersInterceptor
      ParametersInterceptor          com.opensymphony.xwork2.interceptor.ParametersInterceptor
      MyCustomInterceptor            com.xxxx.yyyy.interceptors.GetLoggedOnUserInterceptor

    This issue appears rather weird to me and I have not been able to zero in on the exact cause. Any help in this regard would be highly appreciated. Thanks in advance.

  • Deletes not cascading for self-referencing entities

    - by jwaddell
    I have the following (simplified) Hibernate entities:

      @Entity
      @Table(name = "package")
      public class Package {
          protected Content content;

          @OneToOne(cascade = {javax.persistence.CascadeType.ALL})
          @JoinColumn(name = "content_id")
          @Fetch(value = FetchMode.JOIN)
          public Content getContent() {
              return content;
          }

          public void setContent(Content content) {
              this.content = content;
          }
      }

      @Entity
      @Table(name = "content")
      public class Content {
          private Set<Content> subContents = new HashSet<Content>();
          private ArchivalInformationPackage parentPackage;

          @OneToMany(fetch = FetchType.EAGER)
          @JoinTable(name = "subcontents",
                     joinColumns = {@JoinColumn(name = "content_id")},
                     inverseJoinColumns = {@JoinColumn(name = "elt")})
          @Cascade(value = {org.hibernate.annotations.CascadeType.DELETE,
                            org.hibernate.annotations.CascadeType.REPLICATE})
          @Fetch(value = FetchMode.SUBSELECT)
          public Set<Content> getSubContents() {
              return subContents;
          }

          public void setSubContents(Set<Content> subContents) {
              this.subContents = subContents;
          }

          @ManyToOne(cascade = {CascadeType.ALL})
          @JoinColumn(name = "parent_package_id")
          public Package getParentPackage() {
              return parentPackage;
          }

          public void setParentPackage(Package parentPackage) {
              this.parentPackage = parentPackage;
          }
      }

    So there is one Package, which has one "top" Content. The top Content links back to the Package, with cascade set to ALL. The top Content may have many "sub" Contents, and each sub-Content may have many sub-Contents of its own. Each sub-Content has a parent Package, which may or may not be the same Package as the top Content's (i.e. a many-to-one relationship from Content to Package). The relationships are required to be many-to-one (Package to Content) and many-to-many (Content to sub-Contents), but in the case I am currently testing, each sub-Content relates to only one Package or Content.

    The problem is that when I delete a Package and flush the session, I get a Hibernate error stating that I'm violating a foreign key constraint on table subcontents, with a particular content_id still referenced from the subcontents table. I've tried specifically (recursively) deleting the Contents before deleting the Package, but I get the same error. Is there a reason why this entity tree is not being deleted properly?

    EDIT: After reading answers/comments I realised that a Content cannot have multiple Packages, and a sub-Content cannot have multiple parent Contents, so I have modified the annotations from ManyToOne and ManyToMany to OneToOne and OneToMany. Unfortunately that did not fix the problem. I have also added the bi-directional link from Content back to the parent Package, which I left out of the simplified code.
