Search Results

Search found 243 results on 10 pages for 'chicken soup'.

Page 1/10 | 1 2 3 4 5 6 7 8 9 10  | Next Page >

  • Garbled UI with vnc, chicken-of-the-vnc, gridengine, qmon

    - by bmargulies
    Ubuntu 11.04 gridengine version 6.2u5-1ubuntu1 I start a desktop with the vncserver command. My ~/.vnc/xstartup ends with 'gnome-session'. I connect to it using chicken-of-the-VNC on my MacOSX Lion system. I run 'qmon'. Much of qmon works, but several critical tasks show hopelessly garbled grid layouts. I have filed a bug report, but I suspect that there is something I am missing that would render (ahem) them legible.

    Read the article

  • Dealing with curly brace soup

    - by Cyborgx37
    I've programmed in both C# and VB.NET for years, but primarily in VB. I'm making a career shift toward C# and, overall, I like C# better. One issue I'm having, though, is curly brace soup. In VB, each structure keyword has a matching close keyword, for example:

        Namespace ...
          Class ...
            Function ...
              For ...
                Using ...
                  If ...
                    ...
                  End If
                  If ...
                    ...
                  End If
                End Using
              Next
            End Function
          End Class
        End Namespace

    The same code written in C# ends up very hard to read:

        namespace ... {
          class ... {
            function ... {
              for ... {
                using ... {
                  if ... {
                    ...
                  }
                  if ... {
                    ...
                  }
                }
              } // wait... what level is this?
            }
          }
        }

    Being so used to VB, I'm wondering if there's a technique employed by C-style programmers to improve readability and to ensure that your code ends up in the correct "block". The above example is relatively easy to read, but sometimes at the end of a piece of code I'll have 8 or more levels of curly braces, requiring me to scroll up several pages to figure out which brace ends the block I'm interested in.

    Read the article

  • Python beautiful soup arguments

    - by scott
    Hi, I have this code that fetches some text from a page using BeautifulSoup:

        soup = BeautifulSoup(html)
        body = soup.find('div', {'id': 'body'})
        print body

    I would like to make this a reusable function that takes in some HTML text and the tags to match, like the following:

        def parse(html, atrs):
            soup = BeautifulSoup(html)
            body = soup.find(atrs)
            return body

    But if I make a call like parse(htmlpage, ('div', {'id': 'body'})) or like parse(htmlpage, ['div', {'id': 'body'}]), I get only the div element; the {'id': 'body'} attribute filter seems to get ignored. Is there a way to fix this?
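
    A sketch of one way to make this reusable, assuming BeautifulSoup 3's find(name, attrs) signature (the same call the snippet above uses): keep the tag name and the attribute dict as separate parameters instead of passing them as one tuple, since a tuple passed as the first argument is treated as a set of tag names and the attribute dict never acts as a filter.

        from BeautifulSoup import BeautifulSoup

        def parse(html, name, attrs=None):
            # find() takes the tag name and the attribute dict as separate
            # arguments, so forward them separately rather than as one object
            soup = BeautifulSoup(html)
            return soup.find(name, attrs or {})

        body = parse(htmlpage, 'div', {'id': 'body'})
        # or keep the caller's tuple form and unpack it:
        # body = parse(htmlpage, *('div', {'id': 'body'}))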

    Read the article

  • Beautiful Soup Unicode encode error

    - by iamrohitbanga
    I am trying the following code with a particular HTML file:

        from BeautifulSoup import BeautifulSoup
        import re
        import codecs
        import sys

        f = open('test1.html')
        html = f.read()
        soup = BeautifulSoup(html)
        body = soup.body.contents
        para = soup.findAll('p')
        print str(para).encode('utf-8')

    I get the following error:

        UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 9: ordinal not in range(128)

    How do I debug this?
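
    The str() call is the likely trigger: in Python 2 it asks the interpreter to encode the Unicode tags with the default ascii codec before .encode('utf-8') ever runs. A minimal sketch of one workaround, assuming the same test1.html input, is to build a unicode string first and encode it once at the end:

        from BeautifulSoup import BeautifulSoup

        html = open('test1.html').read()
        soup = BeautifulSoup(html)
        paras = soup.findAll('p')

        # join the tags as unicode, then encode explicitly; str() would
        # implicitly encode with the ascii codec and raise UnicodeEncodeError
        text = u''.join(unicode(p) for p in paras)
        print text.encode('utf-8')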

    Read the article

  • Python Beautiful Soup .content Property

    - by Robert Birch
    What does BeautifulSoup's .content do? I am working through crummy.com's tutorial and I don't really understand what .content does. I have looked at the forums and I have not seen any answers. Looking at the code below...

        from BeautifulSoup import BeautifulSoup
        import re

        doc = ['<html><head><title>Page title</title></head>',
               '<body><p id="firstpara" align="center">This is paragraph <b>one</b>.',
               '<p id="secondpara" align="blah">This is paragraph <b>two</b>.',
               '</html>']
        soup = BeautifulSoup(''.join(doc))
        print soup.contents[0].contents[0].contents[0].contents[0].name

    I would expect the last line of the code to print out 'body' instead of...

        File "pe_ratio.py", line 29, in <module>
          print soup.contents[0].contents[0].contents[0].contents[0].name
        File "C:\Python27\lib\BeautifulSoup.py", line 473, in __getattr__
          raise AttributeError, "'%s' object has no attribute '%s'" % (self.__class__.__name__, attr)
        AttributeError: 'NavigableString' object has no attribute 'name'

    Is .content only concerned with html, head and title? If so, why is that? Thanks for the help in advance.
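
    For what it's worth, .contents is just a list of a tag's direct children in document order, and that list can contain NavigableString objects as well as Tags, which is where the AttributeError comes from. A hedged sketch walking the same document that shows where 'body' actually sits and where the string child appears:

        from BeautifulSoup import BeautifulSoup

        doc = ['<html><head><title>Page title</title></head>',
               '<body><p id="firstpara" align="center">This is paragraph <b>one</b>.',
               '<p id="secondpara" align="blah">This is paragraph <b>two</b>.',
               '</html>']
        soup = BeautifulSoup(''.join(doc))

        html_tag = soup.contents[0]            # the <html> tag
        print html_tag.contents[0].name        # 'head'  -- first child of <html>
        print html_tag.contents[1].name        # 'body'  -- second child of <html>

        title_text = html_tag.contents[0].contents[0].contents[0]
        print repr(title_text)                 # u'Page title' -- a NavigableString, no .name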

    Read the article

  • Building Android app from ant via Hudson - chicken and egg problem

    - by Eno
    When using an Android-generated ant build file, the build references your SDK installation via an sdk.dir property inside the local.properties file, which is generated by "android update project -p .". The comments in build.xml suggest that local.properties should NOT be checked into version control. BUT, when you run your build from Hudson, it does a fresh checkout of your code from version control, so local.properties does not exist and the build fails because sdk.dir is not set. So it's kind of a chicken-and-egg problem. As a workaround I have checked local.properties into version control for now (nobody else will use it), but I was curious as to how other developers have tackled this problem?

    Read the article

  • PList Chicken/Egg Scenario

    - by David van Dugteren
    Hi, I want to read/write to cache.plist. If I want to read an existing premade plist file stored in the resources folder I can go:

        NSString *path = [[NSBundle mainBundle] bundlePath];
        NSString *finalPath = [path stringByAppendingPathComponent:@"cache.plist"];
        NSMutableDictionary *root = ...

    But then I wish to write to it from the iPhone. Can't - the Resources folder is read-only. So I need to use:

        NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)

    So how can I have my plist file preinstalled to the Documents directory location? That way I don't have to mess around with untidy code copying the plist file over at startup. (Unless that's the only way.)

    Read the article

  • Find unique vertices from a 'triangle-soup'

    - by sum1stolemyname
    I am building a CAD-file converter on top of two libraries (Opencascade and DWF Toolkit). However, my question is platform agnostic.

    Given: I have generated a mesh as a list of triangular faces from a model constructed through my application. Each triangle is defined by three vertices, each of which consists of three floats (x, y & z coordinates). Since the triangles form a mesh, most of the vertices are shared by more than one triangle.

    Goal: I need to find the list of unique vertices, and to generate an array of faces consisting of tuples of three indices into this list. What I want to do is this:

        // step 1: build a list of unique vertices
        for each triangle
            for each vertex in triangle
                if not vertex in listOfVertices
                    Add vertex to listOfVertices

        // step 2: build a list of faces
        for each triangle
            for each vertex in triangle
                Get Vertex Index From listOfVertices
                AddToMap(vertex Index, triangle)

    While I do have an implementation which does this, step 1 (the generation of the list of unique vertices) is really slow, on the order of O(n²), since each vertex is compared to all vertices already in the list. I thought "Hey, let's build a hashmap of my vertices' components using std::map, that ought to speed things up!", only to find that generating a unique key from three floating point values is not a trivial task. Here, the experts of stackoverflow come into play: I need some kind of hash function which works on 3 floats, or any other function generating a unique value from a 3D vertex position.
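
    The usual trick, language aside, is to avoid keying on raw floats and instead quantize each coordinate to a fixed tolerance, so that vertices that are "equal enough" collapse to the same key. A rough sketch of the idea in Python (the tolerance value and the triangles variable are assumptions; triangles stands for the mesh's list of three-vertex faces):

        def vertex_key(vertex, tol=1e-6):
            # quantize each coordinate so nearly identical vertices share one key;
            # tol is an assumed tolerance you would tune to the model's units
            return tuple(int(round(c / tol)) for c in vertex)

        index_of = {}        # quantized key -> index into unique_vertices
        unique_vertices = []
        faces = []           # triangles as triples of indices

        for triangle in triangles:            # each triangle: three (x, y, z) tuples
            face = []
            for vertex in triangle:
                key = vertex_key(vertex)
                if key not in index_of:
                    index_of[key] = len(unique_vertices)
                    unique_vertices.append(vertex)
                face.append(index_of[key])
            faces.append(tuple(face))

    The same integer-triple keys drop straight into a std::map (or unordered_map) in C++, which avoids hashing the floats directly.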

    Read the article

  • urllib2.Request() with data returns empty url

    - by Mr. Polywhirl
    My main concern is the function: getUrlAndHtml() If I manually build and append the query to the end of the uri, I can get the response.url(), but if I pass a dictionary as the request data, the url does not come back. Is there anyway to guarantee the redirected url? In my example below, if thisWorks = True I get back a url, but the returned url is the request url as opposed to a redirect link. On a sidenote, the encoding for .E2.80.93 does not translate to - for some reason? #!/usr/bin/python import pprint import urllib import urllib2 from bs4 import BeautifulSoup from sys import argv URL = 'http://en.wikipedia.org/w/index.php?' def yesOrNo(boolVal): return 'yes' if boolVal else 'no' def getTitleFromRaw(page): return page.strip().replace(' ', '_') def getUrlAndHtml(title, printable=False): thisWorks = False if thisWorks: query = 'title={:s}&printable={:s}'.format(title, yesOrNo(printable)) opener = urllib2.build_opener() opener.addheaders = [('User-agent', 'Mozilla/5.0')] response = opener.open(URL + query) else: params = {'title':title,'printable':yesOrNo(printable)} data = urllib.urlencode(params) headers = {'User-agent':'Mozilla/5.0'}; request = urllib2.Request(URL, data, headers) response = urllib2.urlopen(request) return response.geturl(), response.read() def getSoup(html, name=None, attrs=None): soup = BeautifulSoup(html) if name is None: return None return soup.find(name, attrs) def setTitle(soup, newTitle): title = soup.find('div', {'id':'toctitle'}) h2 = title.find('h2') h2.contents[0].replaceWith('{:s} for {:s}'.format(h2.getText(), newTitle)) def updateLinks(soup, url): fragment = '#' for a in soup.findAll('a', href=True): a['href'] = a['href'].replace(fragment, url + fragment) def writeToFile(soup, filename='out.html', indentLevel=2): with open(filename, 'wt') as out: pp = pprint.PrettyPrinter(indent=indentLevel, stream=out) pp.pprint(soup) print('Wrote {:s} successfully.'.format(filename)) if __name__ == '__main__': def exitPgrm(): print('usage: {:s} "<PAGE>" <FILE>'.format(argv[0])) exit(0) if len(argv) == 2: help = argv[1] if help == '-h' or help == '--help': exitPgrm() if False:''' if not len(argv) == 3: exitPgrm() ''' page = 'Led Zeppelin' # argv[1] filename = 'test.html' # argv[2] title = getTitleFromRaw(page) url, html = getUrlAndHtml(title) soup = getSoup(html, 'div', {'id':'toc'}) setTitle(soup, page) updateLinks(soup, url) writeToFile(soup, filename)
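
    A hedged guess at the cause: giving urllib2.Request a data argument turns the request into a POST, and the response then corresponds to the request URL rather than the redirected article page, so geturl() has nothing new to report. A small sketch that keeps the parameter dictionary but sends it as a GET query string (the function name is mine, not the original getUrlAndHtml):

        import urllib
        import urllib2

        URL = 'http://en.wikipedia.org/w/index.php?'

        def get_url_and_html(title, printable=False):
            # send the parameters as a query string (GET) so redirects are followed
            query = urllib.urlencode({'title': title,
                                      'printable': 'yes' if printable else 'no'})
            request = urllib2.Request(URL + query, headers={'User-agent': 'Mozilla/5.0'})
            response = urllib2.urlopen(request)
            return response.geturl(), response.read()

    As for .E2.80.93: that is MediaWiki's dot-encoding of the UTF-8 bytes for an en dash in section anchors, so it is not expected to turn into a plain hyphen.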

    Read the article

  • Faith and Godaddy [closed]

    - by yuval
    Let's remove the capitalizations from the name GoDaddy - the hosting company: godaddy. Now let's look at the spacing a little differently: god addy, and capitalize: God Addy: God Addy: The address of God Did anybody notice this? How many negative votes will this question get? How long until someone closes the discussion on this question? All of this and more... NOW! Someone favorited this! woo! In addition to my serious and intelligent previous question: Who came first, the chicken or the egg?

    Read the article

  • How to convert Beautiful Soup Unicode into a decimal value?

    - by MikeTheCoder
    I'm trying to use Python's Beautiful Soup library to grab a bunch of divs from an HTML file, and from there get the string - which is a money value - that's inside each div. Then remove the dollar sign and convert it to a decimal so that I can use greater-than and less-than conditional statements to compare values. I have googled the heck out of it and can't seem to come up with a way to convert this unicode string into a decimal value. I really could use some help here. How do I convert unicode into a decimal value? This was my last attempt:

        import unicodedata
        from bs4 import BeautifulSoup

        soup = BeautifulSoup(open("/Users/sm/Documents/python/htmldemo.html"))

        for tag in soup.findAll("div", attrs={"itemprop": "price"}):
            val = tag.string
            new_val = val[8:]
            workable = int(new_val)
            if workable > 250:
                print(type(workable))
            else:
                print(type(workable))

    Edit: When I print the type of new_val I get : print(type(new_val))
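
    A hedged sketch of the conversion step, assuming the div's text looks something like '$1,234.56': strip the currency symbol and separators, then hand the remaining text to decimal.Decimal rather than int, which would reject the decimal point.

        from decimal import Decimal, InvalidOperation

        def to_decimal(val):
            # val is the unicode from tag.string, e.g. u'$1,234.56' (format assumed)
            cleaned = val.strip().lstrip(u'$').replace(u',', u'')
            try:
                return Decimal(cleaned)
            except InvalidOperation:
                return None              # not a parseable money value

        price = to_decimal(u'$1,234.56')
        if price is not None and price > Decimal('250'):
            print(type(price))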

    Read the article

  • How to debug problems in Linux kernel module `init()`?

    - by Kimvais
    I am using remote (k)gdb to debug a problem in a module that causes a panic when loaded, e.g. when init() is called. The stack trace just shows that do_one_initcall(mod->init) causes the crash. In order to get the symbol file loaded in gdb, I need the address of the module's text section, and to get that I need to get the module loaded. Because the insmod in busybox (1.16.1) doesn't support -m, I'm stuck with grepping modulename from /proc/modules and adding the offset from nm to figure out the address. So I'm facing a sort of chicken-and-egg problem here - to be able to debug the module loading, I need to get the module loaded - but in order to get the module loaded, I need to debug the problem... So I am currently thinking about two options - is there a way to get the address information, either by printk() in the module init code, or by printk() somewhere in the kernel code, all this prior to calling mod->init() - so I could place a breakpoint there, load the symbol file, hit c and see it crash and burn...

    Read the article

  • 3 Scenarios for most relevant keywords in website. Which one is best?

    - by Sam
    A webpage about Tomato Soup has one of the three following filenames:

        Scenario 1: website.org/en/tomato-soup
        Scenario 2: website.org/en/tomato-soup-healthy-soups-recipes
        Scenario 3: website.org/en/tomato-why-sandra-is-so-wild-about-her-healthy-tomato-soup-recipes

    Q1. Which one of the above would you go for?
    Q2. Which one of these would be ranked as most relevant by Google?
    Q3. Would any of these be penalized for keyword stuffing?

    Read the article

  • Beautiful Soup: how to print a tag while iterating over it

    - by Bunny Rabbit
    <?xml version="1.0" encoding="UTF-8"?> <playlist version="1" xmlns="http://xspf.org/ns/0/"> <trackList> <track> <location>file:///home/ashu/Music/Collections/randomPicks/ipod%20on%20sep%2009/Coldplay-Sparks.mp3</location> <title>Coldplay-Sparks</title> </track> <track> <location>file:///home/ashu/Music/Collections/randomPicks/gud%201s/Coldplay%20Warning%20sign.mp3</location> <title>Coldplay Warning sign</title> </track>.... My xml looks like this , i want to get the locations, i am trying from BeautifulSoup import BeautifulSoup as bs soup = bs (the_above_xml_text) for track in soup.tracklist: print track.location.string but that is not working because i am getting AttributeError: 'NavigableString' object has no attribute 'location' how can i achive the result , thanks in advance.
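
    Iterating soup.tracklist directly also visits the whitespace text nodes between the <track> tags, and those NavigableStrings have no .location, hence the error. A minimal sketch, assuming the playlist text shown above is held in a variable (here called xml_text), that loops over the track tags only:

        from BeautifulSoup import BeautifulSoup

        soup = BeautifulSoup(xml_text)         # xml_text: the playlist markup above

        for track in soup.findAll('track'):    # visits only the <track> tags,
            print track.location.string        # not the whitespace between them

    BeautifulSoup 3 also ships BeautifulStoneSoup, its XML-oriented variant, which may be a better fit for XSPF than the HTML parser.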

    Read the article

  • Parsing HTML with Python 2.7 - HTMLParser, SGMLParser, or Beautiful Soup?

    - by Eric Wilson
    I want to do some screen-scraping with Python 2.7, and I have no context for the differences between HTMLParser, SGMLParser, or Beautiful Soup. Are these all trying to solve the same problem, or do they exist for different reasons? Which is simplest, which is most robust, and which (if any) is the default choice? Also, please let me know if I have overlooked a significant option. Edit: I should mention that I'm not particularly experienced in HTML parsing, and I'm particularly interested in which will get me moving the quickest, with the goal of parsing HTML on one particular site.

    Read the article

  • Any Suggestions on How to Soup Up/ Mod a MacBook Pro 13"?

    - by 5arx
    So I've got a mid-2009 MacBook Pro 13". Integrated GPU so not a games machine but fast enough for doing .Net development in VMs. I love the little thing and wanted to give it a Christmas present so thought I'd mod it up a bit and give it a boost. I'm probably going to go for a 500GB Seagate Momentus XT hybrid drive rather than full-on SSD (I need 500GB space) but was wondering if there are any other mods/tweaks people could suggest? I saw something online about swapping a HDD for the DVD drive and wondered if anyone had tried this or similarly drastic mods to the smallest of the MBPs. Cheers.

    Read the article

  • SVN Workflow - Chicken Before the Egg - Before merging V1 with V2, I need code from V1 to work on V2

    - by Jake
    Hi, Our distributed team (3 internal devs and 3+ external devs) use SVN to manage our codebase for a web site. We have a branch for each minor version (4.1.0, 4.1.1, 4.1.2, etc...). We have a trunk which we merge each version into when we do a release and publish to our site. An example of the problem we are having is thus: A new feature is added, lets call it "Ability to Create A Project" to 4.1.1. Another feature which depends on the one in 4.1.1 is scheduled to go in 4.1.2, called "Ability to Add Tasks to Projects". So, on Monday, we say 4.1.1 is 'closed' and needs to be tested. Our remote developers usually will start working on features/tickets for 4.1.2 at this point. Throughout the week we will test 4.1.1 and fix any bugs and commit them back to 4.1.1. Then, on Friday or so, we will tag 4.1.1, merge it with trunk, and finally, merge it with 4.1.2. But, for the 4-5 days we are testing, 4.1.2 doesn't have the code from 4.1.1 that some of the new features for 4.1.2 depend on. So a dev who is adding the "Ability to Add Tasks To Projects" feature, doesn't have the "Ability to Create a Project" feature to build upon, and has to do some file copy shenanigans to be able to keep working on it. What could/should we do to smooth out this process? P.S. Apologies if this question has been asked before - I did search but couldn't find what I'm looking for.

    Read the article

  • Chicken and egg problem (restore database) when trying to write unit tests against SQL Server 2008.

    - by Hamish Grubijan
    Ok, they are not unit tests but end-to-end tests. The setup is somewhat involved. Unit tests will use C#, ODBC connection. Every unit tests will try to clean up after itself, but every 20 tests or so (once per C# class) we would need to do a full database restore. I do not think I can do it over an ODBC connection, according to this document: http://www.sql-server-performance.com/articles/dba/Obtain_Exclusive_Access_to_Restore_SQL_Server_p1.aspx Msg 6104, Level 16, State 1, Line 1 Cannot use KILL to kill your own process. However, I would like to, so that 199 tests do not go amok because of a bad clean-up. Is there another way? Perhaps I can open a different "connection" such as use COM automation or something of that sort, and then kill all database connections from there? If so, how can I do that? Also, will the clients be able to re-connect automatically after a restore, or would I have to dismantle everything once every 20 tests or so? If you find this question confusing, please let me know what your questions are. Thanks!

    Read the article

  • BeautifulSoup HTMLParseError. What's wrong with this?

    - by user1915496
    This is my code: from bs4 import BeautifulSoup as BS import urllib2 url = "http://services.runescape.com/m=news/recruit-a-friend-for-free-membership-and-xp" res = urllib2.urlopen(url) soup = BS(res.read()) other_content = soup.find_all('div',{'class':'Content'})[0] print other_content Yet an error comes up: /Library/Python/2.7/site-packages/bs4/builder/_htmlparser.py:149: RuntimeWarning: Python's built-in HTMLParser cannot parse the given document. This is not a bug in Beautiful Soup. The best solution is to install an external parser (lxml or html5lib), and use Beautiful Soup with that parser. See http://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser for help. "Python's built-in HTMLParser cannot parse the given document. This is not a bug in Beautiful Soup. The best solution is to install an external parser (lxml or html5lib), and use Beautiful Soup with that parser. See http://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser for help.")) Traceback (most recent call last): File "web.py", line 5, in <module> soup = BS(res.read()) File "/Library/Python/2.7/site-packages/bs4/__init__.py", line 172, in __init__ self._feed() File "/Library/Python/2.7/site-packages/bs4/__init__.py", line 185, in _feed self.builder.feed(self.markup) File "/Library/Python/2.7/site-packages/bs4/builder/_htmlparser.py", line 150, in feed raise e I've let two other people use this code, and it works for them perfectly fine. Why is it not working for me? I have bs4 installed...
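
    The warning text already names the usual fix: install lxml or html5lib and pass the parser name to the constructor so bs4 stops falling back to the stdlib HTMLParser that chokes on this page. That would also explain why the same script runs for other people, who presumably have one of those parsers installed. A minimal sketch, assuming html5lib (or lxml) has been pip-installed:

        import urllib2
        from bs4 import BeautifulSoup as BS

        url = "http://services.runescape.com/m=news/recruit-a-friend-for-free-membership-and-xp"
        res = urllib2.urlopen(url)

        # name an installed parser explicitly; 'lxml' works the same way
        soup = BS(res.read(), 'html5lib')

        divs = soup.find_all('div', {'class': 'Content'})
        if divs:
            print divs[0]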

    Read the article

  • In Python BeautifulSoup How to move tags

    - by JJ
    I have a partially converted XML document in soup coming from HTML. After some replacement and editing in the soup, the body is essentially - <Text...></Text> # This replaces <a href..> tags but automatically creates the </Text> <p class=norm ...</p> <p class=norm ...</p> <Text...></Text> <p class=norm ...</p> and so forth. I need to "move" the <p> tags to be children to <Text> or know how to suppress the </Text>. I want - <Text...> <p class=norm ...</p> <p class=norm ...</p> </Text> <Text...> <p class=norm ...</p> </Text> I've tried using item.insert and item.append but I'm thinking there must be a more elegant solution. for item in soup.findAll(['p','span']): if item.name == 'span' and item.has_key('class') and item['class'] == 'section': xBCV = short_2_long(item._getAttrMap().get('value','')) if currentnode: pass currentnode = Tag(soup,'Text', attrs=[('TypeOf', 'Section'),... ]) item.replaceWith(currentnode) # works but creates end tag elif item.name == 'p' and item.has_key('class') and item['class'] == 'norm': childcdatanode = None for ahref in item.findAll('a'): if childcdatanode: pass newlink = filter_hrefs(str(ahref)) childcdatanode = Tag(soup, newlink) ahref.replaceWith(childcdatanode) Thanks
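
    One hedged way to get that nesting without fighting the automatically added </Text>, using BeautifulSoup 3's Tag API (extract() detaches a node, append() re-parents it): after the replacement pass, walk each Text tag and pull the following <p class="norm"> siblings inside it. The helper below is an illustration, not the original code; soup is the tree built above.

        def reparent_paragraphs(soup):
            # move each <p class="norm"> that follows a <Text> tag inside that tag,
            # stopping when the next <Text> section begins
            for text_tag in soup.findAll('Text'):
                sibling = text_tag.nextSibling
                while sibling is not None:
                    next_one = sibling.nextSibling          # remember before moving
                    name = getattr(sibling, 'name', None)   # NavigableStrings have no name
                    if name == 'Text':
                        break
                    if name == 'p' and sibling.get('class') == 'norm':
                        sibling.extract()                   # detach from the old spot
                        text_tag.append(sibling)            # re-attach as a child of <Text>
                    sibling = next_one

        # usage: reparent_paragraphs(soup) after the replaceWith() pass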

    Read the article

  • BeautifulSoup can't parse a webpage?

    - by JLTChiu
    I am using Beautiful Soup for parsing a webpage now. I've heard it's very famous and good, but it doesn't seem to work properly. Here's what I did:

        import urllib2
        from bs4 import BeautifulSoup

        page = urllib2.urlopen("http://www.cnn.com/2012/10/14/us/skydiver-record-attempt/index.html?hpt=hp_t1")
        soup = BeautifulSoup(page)
        print soup.prettify()

    I think this is kind of straightforward: I open the webpage and pass it to BeautifulSoup. But here's what I got:

        Warning (from warnings module):
          File "C:\Python27\lib\site-packages\bs4\builder\_htmlparser.py", line 149
        "Python's built-in HTMLParser cannot parse the given document. This is not a bug in Beautiful Soup. The best solution is to install an external parser (lxml or html5lib), and use Beautiful Soup with that parser. See http://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser for help."))
        ...
        HTMLParseError: bad end tag: u'</"+"script>', at line 634, column 94

    I thought the CNN website would be well designed, so I am not very sure what's going on. Does anyone have any idea about this?

    Read the article

  • beautifulsoup can't find existing href in file

    - by young001
    I have an HTML file like the following (it is only part of the page; I include it to make the question clear):

        <form action="/2811457/follow?gsid=3_5bce9b871484d3af90c89f37" method="post">
          <div>
            <a href="/2811457/follow?page=2&amp;gsid=3_5bce9b871484d3af90c89f37">next_page</a>
            &nbsp;<input name="mp" type="hidden" value="3" />
            <input type="text" name="page" size="2" style='-wap-input-format: "*N"' />
            <input type="submit" value="jump" />&nbsp;1/3
          </div>
        </form>

    How do I extract the "1/3" from the file? I'm new to BeautifulSoup and I have looked at the documentation, but I'm still confused. This attempt:

        total_urls_num = soup.find(re.compile('.*/d\//d.*'))

    doesn't work. As JBernardo said, \d should be used for a digit, but when I change the pattern to .*\d/\d.* it doesn't work either. My code:

        from BeautifulSoup import BeautifulSoup
        import re

        with open("html.txt", "r") as f:
            response = f.read()
        print response
        soup = BeautifulSoup(response)
        delete_urls = soup.findAll('a', href=re.compile('follow\?page'))  # works
        print delete_urls
        #total_urls_num = soup.find(re.compile('.*\d/\d.*'))
        total_urls_num = soup.find('input', style='submit')  # doesn't work
        print total_urls_num
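
    Part of the problem is where find() looks: a bare regex as the first argument is matched against tag names, not against the text in the document, so neither pattern can ever see "1/3". A small sketch against the same html.txt that searches the text nodes instead:

        import re
        from BeautifulSoup import BeautifulSoup

        with open("html.txt", "r") as f:
            soup = BeautifulSoup(f.read())

        # search the text nodes (not the tag names) for something shaped like "1/3"
        fraction = soup.find(text=re.compile(r'\d+/\d+'))
        if fraction:
            print re.search(r'\d+/\d+', fraction).group(0)    # -> 1/3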

    Read the article

  • Python Continue Loop

    - by Rob B.
    I am using the following code from this tutorial (http://jeriwieringa.com/blog/2012/11/04/beautiful-soup-tutorial-part-1/). from bs4 import BeautifulSoup soup = BeautifulSoup (open("43rd-congress.html")) final_link = soup.p.a final_link.decompose() trs = soup.find_all('tr') for tr in trs: for link in tr.find_all('a'): fulllink = link.get ('href') print fulllink #print in terminal to verify results tds = tr.find_all("td") try: #we are using "try" because the table is not well formatted. This allows the program to continue after encountering an error. names = str(tds[0].get_text()) # This structure isolate the item by its column in the table and converts it into a string. years = str(tds[1].get_text()) positions = str(tds[2].get_text()) parties = str(tds[3].get_text()) states = str(tds[4].get_text()) congress = tds[5].get_text() except: print "bad tr string" continue #This tells the computer to move on to the next item after it encounters an error print names, years, positions, parties, states, congress However, I get an error saying that 'continue' is not properly in the loop on line 27. I am using notepad++ and windows powershell. How do I make this code work?
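
    "'continue' not properly in loop" almost always means the indentation was lost when the code was copied, so the try/except ended up outside the for loop. A hedged sketch of how the tail of the loop from the tutorial is probably meant to line up, with everything from tds onward indented one level inside the for tr in trs: loop:

        for tr in trs:
            for link in tr.find_all('a'):
                fulllink = link.get('href')
                print fulllink                   # print in terminal to verify results
            tds = tr.find_all("td")
            try:
                # the table is not well formed, so a bad row raises here
                names = str(tds[0].get_text())
                years = str(tds[1].get_text())
                positions = str(tds[2].get_text())
                parties = str(tds[3].get_text())
                states = str(tds[4].get_text())
                congress = tds[5].get_text()
            except:
                print "bad tr string"
                continue                         # legal: still inside "for tr in trs:"
            print names, years, positions, parties, states, congress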

    Read the article

  • How to generate DELETE statements in PL/SQL, based on the tables FK relations?

    - by The chicken in the kitchen
    Is it possible via script/tool to generate authomatically many delete statements based on the tables fk relations, using Oracle PL/SQL? In example: I have the table: CHICKEN (CHICKEN_CODE NUMBER) and there are 30 tables with fk references to its CHICKEN_CODE that I need to delete; there are also other 150 tables foreign-key-linked to that 30 tables that I need to delete first. Is there some tool/script PL/SQL that I can run in order to generate all the necessary delete statements based on the FK relations for me? (by the way, I know about cascade delete on the relations, but please pay attention: I CAN'T USE IT IN MY PRODUCTION DATABASE, because it's dangerous!) I'm using Oracle DataBase 10G R2. This is the result I've written, but it is not recursive: This is a view I have previously written, but of course it is not recursive! CREATE OR REPLACE FORCE VIEW RUN ( OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME, VINCOLO ) AS SELECT OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME, '(' || LTRIM ( EXTRACT (XMLAGG (XMLELEMENT ("x", ',' || COLUMN_NAME)), '/x/text()'), ',') || ')' VINCOLO FROM ( SELECT CON1.OWNER OWNER_1, CON1.TABLE_NAME TABLE_NAME_1, CON1.CONSTRAINT_NAME CONSTRAINT_NAME_1, CON1.DELETE_RULE, CON1.STATUS, CON.TABLE_NAME, CON.CONSTRAINT_NAME, COL.POSITION, COL.COLUMN_NAME FROM DBA_CONSTRAINTS CON, DBA_CONS_COLUMNS COL, DBA_CONSTRAINTS CON1 WHERE CON.OWNER = 'TABLE_OWNER' AND CON.TABLE_NAME = 'TABLE_OWNED' AND ( (CON.CONSTRAINT_TYPE = 'P') OR (CON.CONSTRAINT_TYPE = 'U')) AND COL.TABLE_NAME = CON1.TABLE_NAME AND COL.CONSTRAINT_NAME = CON1.CONSTRAINT_NAME --AND CON1.OWNER = CON.OWNER AND CON1.R_CONSTRAINT_NAME = CON.CONSTRAINT_NAME AND CON1.CONSTRAINT_TYPE = 'R' GROUP BY CON1.OWNER, CON1.TABLE_NAME, CON1.CONSTRAINT_NAME, CON1.DELETE_RULE, CON1.STATUS, CON.TABLE_NAME, CON.CONSTRAINT_NAME, COL.POSITION, COL.COLUMN_NAME) GROUP BY OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME; ... and it contains the error of using DBA_CONSTRAINTS instead of ALL_CONSTRAINTS... Please pay attention to this: http://stackoverflow.com/questions/485581/generate-delete-statement-from-foreign-key-relationships-in-sql-2008/2677145#2677145 Another user has just written it in SQL SERVER 2008, anyone is able to convert to Oracle 10G PL/SQL? I am not able to... :-( This is the code written by another user in SQL SERVER 2008: DECLARE @COLUMN_NAME AS sysname DECLARE @TABLE_NAME AS sysname DECLARE @IDValue AS int SET @COLUMN_NAME = '<Your COLUMN_NAME here>' SET @TABLE_NAME = '<Your TABLE_NAME here>' SET @IDValue = 123456789 DECLARE @sql AS varchar(max) ; WITH RELATED_COLUMNS AS ( SELECT QUOTENAME(c.TABLE_SCHEMA) + '.' 
+ QUOTENAME(c.TABLE_NAME) AS [OBJECT_NAME] ,c.COLUMN_NAME FROM PBANKDW.INFORMATION_SCHEMA.COLUMNS AS c WITH (NOLOCK) INNER JOIN PBANKDW.INFORMATION_SCHEMA.TABLES AS t WITH (NOLOCK) ON c.TABLE_CATALOG = t.TABLE_CATALOG AND c.TABLE_SCHEMA = t.TABLE_SCHEMA AND c.TABLE_NAME = t.TABLE_NAME AND t.TABLE_TYPE = 'BASE TABLE' INNER JOIN ( SELECT rc.CONSTRAINT_CATALOG ,rc.CONSTRAINT_SCHEMA ,lkc.TABLE_NAME ,lkc.COLUMN_NAME FROM PBANKDW.INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS rc WITH (NOLOCK) INNER JOIN PBANKDW.INFORMATION_SCHEMA.KEY_COLUMN_USAGE lkc WITH (NOLOCK) ON lkc.CONSTRAINT_CATALOG = rc.CONSTRAINT_CATALOG AND lkc.CONSTRAINT_SCHEMA = rc.CONSTRAINT_SCHEMA AND lkc.CONSTRAINT_NAME = rc.CONSTRAINT_NAME INNER JOIN PBANKDW.INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc WITH (NOLOCK) ON rc.CONSTRAINT_CATALOG = tc.CONSTRAINT_CATALOG AND rc.CONSTRAINT_SCHEMA = tc.CONSTRAINT_SCHEMA AND rc.UNIQUE_CONSTRAINT_NAME = tc.CONSTRAINT_NAME INNER JOIN PBANKDW.INFORMATION_SCHEMA.KEY_COLUMN_USAGE rkc WITH (NOLOCK) ON rkc.CONSTRAINT_CATALOG = tc.CONSTRAINT_CATALOG AND rkc.CONSTRAINT_SCHEMA = tc.CONSTRAINT_SCHEMA AND rkc.CONSTRAINT_NAME = tc.CONSTRAINT_NAME WHERE rkc.COLUMN_NAME = @COLUMN_NAME AND rkc.TABLE_NAME = @TABLE_NAME ) AS j ON j.CONSTRAINT_CATALOG = c.TABLE_CATALOG AND j.CONSTRAINT_SCHEMA = c.TABLE_SCHEMA AND j.TABLE_NAME = c.TABLE_NAME AND j.COLUMN_NAME = c.COLUMN_NAME ) SELECT @sql = COALESCE(@sql, '') + 'DELETE FROM ' + [OBJECT_NAME] + ' WHERE ' + [COLUMN_NAME] + ' = ' + CONVERT(varchar, @IDValue) + CHAR(13) + CHAR(10) FROM RELATED_COLUMNS PRINT @sql Thank to Charles, this is the latest not working release of the software, I have added a parameter with the OWNER because the referential integrities propagate through about 5 other Oracle users (!!!): CREATE OR REPLACE PROCEDURE delete_cascade ( parent_table VARCHAR2, parent_table_owner VARCHAR2) IS cons_name VARCHAR2 (30); tab_name VARCHAR2 (30); tab_name_owner VARCHAR2 (30); parent_cons VARCHAR2 (30); parent_col VARCHAR2 (30); delete1 VARCHAR (500); delete2 VARCHAR (500); delete_command VARCHAR (4000); CURSOR cons_cursor IS SELECT constraint_name, r_constraint_name, table_name, constraint_type FROM all_constraints WHERE constraint_type = 'R' AND r_constraint_name IN (SELECT constraint_name FROM all_constraints WHERE constraint_type IN ('P', 'U') AND table_name = parent_table AND owner = parent_table_owner) AND delete_rule = 'NO ACTION'; CURSOR tabs_cursor IS SELECT DISTINCT table_name FROM all_cons_columns WHERE constraint_name = cons_name; CURSOR child_cols_cursor IS SELECT column_name, position FROM all_cons_columns WHERE constraint_name = cons_name AND table_name = tab_name; BEGIN FOR cons IN cons_cursor LOOP cons_name := cons.constraint_name; parent_cons := cons.r_constraint_name; SELECT DISTINCT table_name, owner INTO tab_name, tab_name_owner FROM all_cons_columns WHERE constraint_name = cons_name; delete_cascade (tab_name, tab_name_owner); delete_command := ''; delete1 := ''; delete2 := ''; FOR col IN child_cols_cursor LOOP SELECT DISTINCT column_name INTO parent_col FROM all_cons_columns WHERE constraint_name = parent_cons AND position = col.position; IF delete1 IS NULL THEN delete1 := col.column_name; ELSE delete1 := delete1 || ', ' || col.column_name; END IF; IF delete2 IS NULL THEN delete2 := parent_col; ELSE delete2 := delete2 || ', ' || parent_col; END IF; END LOOP; delete_command := 'delete from ' || tab_name_owner || '.' || tab_name || ' where (' || delete1 || ') in (select ' || delete2 || ' from ' || parent_table_owner || '.' 
|| parent_table || ');'; INSERT INTO ris VALUES (SEQUENCE_COMANDI.NEXTVAL, delete_command); COMMIT; END LOOP; END; / In the cursor CONS_CURSOR I have added the condition: AND delete_rule = 'NO ACTION'; in order to avoid deletion in case of referential integrities with DELETE_RULE = 'CASCADE' or DELETE_RULE = 'SET NULL'. Now I have tried to turn from stored procedure to stored function, but the delete statements are not correct: CREATE OR REPLACE FUNCTION deletecascade ( parent_table VARCHAR2, parent_table_owner VARCHAR2) RETURN VARCHAR2 IS cons_name VARCHAR2 (30); tab_name VARCHAR2 (30); tab_name_owner VARCHAR2 (30); parent_cons VARCHAR2 (30); parent_col VARCHAR2 (30); delete1 VARCHAR (500); delete2 VARCHAR (500); delete_command VARCHAR (4000); AT_LEAST_ONE_ITERATION NUMBER DEFAULT 0; CURSOR cons_cursor IS SELECT constraint_name, r_constraint_name, table_name, constraint_type FROM all_constraints WHERE constraint_type = 'R' AND r_constraint_name IN (SELECT constraint_name FROM all_constraints WHERE constraint_type IN ('P', 'U') AND table_name = parent_table AND owner = parent_table_owner) AND delete_rule = 'NO ACTION'; CURSOR tabs_cursor IS SELECT DISTINCT table_name FROM all_cons_columns WHERE constraint_name = cons_name; CURSOR child_cols_cursor IS SELECT column_name, position FROM all_cons_columns WHERE constraint_name = cons_name AND table_name = tab_name; BEGIN FOR cons IN cons_cursor LOOP AT_LEAST_ONE_ITERATION := 1; cons_name := cons.constraint_name; parent_cons := cons.r_constraint_name; SELECT DISTINCT table_name, owner INTO tab_name, tab_name_owner FROM all_cons_columns WHERE constraint_name = cons_name; delete1 := ''; delete2 := ''; FOR col IN child_cols_cursor LOOP SELECT DISTINCT column_name INTO parent_col FROM all_cons_columns WHERE constraint_name = parent_cons AND position = col.position; IF delete1 IS NULL THEN delete1 := col.column_name; ELSE delete1 := delete1 || ', ' || col.column_name; END IF; IF delete2 IS NULL THEN delete2 := parent_col; ELSE delete2 := delete2 || ', ' || parent_col; END IF; END LOOP; delete_command := 'delete from ' || tab_name_owner || '.' || tab_name || ' where (' || delete1 || ') in (select ' || delete2 || ' from ' || parent_table_owner || '.' || parent_table || ');' || deletecascade (tab_name, tab_name_owner); INSERT INTO ris VALUES (SEQUENCE_COMANDI.NEXTVAL, delete_command); COMMIT; END LOOP; IF AT_LEAST_ONE_ITERATION = 1 THEN RETURN ' where COD_CHICKEN = V_CHICKEN AND COD_NATION = V_NATION;'; ELSE RETURN NULL; END IF; END; / Please assume that V_CHICKEN and V_NATION are the criteria to select the CHICKEN to delete from the root table: the condition is: "where COD_CHICKEN = V_CHICKEN AND COD_NATION = V_NATION" on the root table.

    Read the article
