Search Results

Search found 22463 results on 899 pages for 'sub query'.


  • Google shows subdomain of main site instead of add on domain URL [closed]

    - by Welsher
    I have my host (Lunarpages) set up with a few add-on domains to my main account. These show up as sub-domains of my main account, but they can be reached by using the new domain I've created. So: subdomain1.domain.com -- www.mynewsite.com, subdomain2.domain.com -- www.myothersite.com, etc. The problem is, mynewsite.com shows up in Google with that domain, but myothersite.com shows up as subdomain2.domain.com. I don't have a clue what might be causing this to happen. If anyone has any advice or can point me in the right direction, I'd really appreciate it! Thanks.
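
    A workaround commonly suggested for this kind of indexing mix-up (not from the post itself) is to make the add-on domain canonical with a 301 redirect, so the subdomain alias never gets indexed. The sketch below is only an illustration: it assumes mod_rewrite is available on the host, reuses the hostnames from the question, and would live in the add-on domain's .htaccess.

    ```apache
    # Hypothetical .htaccess rules: send any request for the subdomain alias
    # to the add-on domain that Google should index.
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^subdomain2\.domain\.com$ [NC]
    RewriteRule ^(.*)$ http://www.myothersite.com/$1 [R=301,L]
    ```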

    Read the article

  • NHibernate will not cascade delete children

    - by marn
    The scenario is as follows: I have 3 objects (I simplified the names) named Parent, parent's child and child's child. Parent's child is a set in parent, and child's child is a set in child. The mapping is as follows (relevant parts): parent <set name="parentset" table="pc-table" lazy="false" fetch="subselect" cascade="all-delete-orphan" inverse="true"> <key column="FK_ID_PC" on-delete="cascade"/> <one-to-many class="parentchild,parentchild-ns"/> </set> parent's child <set name="childset" table="cc-table" lazy="false" fetch="subselect" cascade="all-delete-orphan" inverse="true"> <key column="FK_ID_CC" on-delete="cascade"/> <one-to-many class="childschild,childschild-ns"/> </set> What I want to achieve is that when I delete the parent, there would be a cascade delete all the way through to child's child. But what currently happens is this (this is purely for mapping test purposes): getting a parent entity (works fine) IQuery query = session.CreateQuery("from Parent where ID =" + ID); IParent doc = query.UniqueResult<Parent>(); now the delete part session.Delete(doc); transaction.Commit(); After having solved the 'cannot insert null value' error with cascading and inverse I hoped this would now delete everything with this code, but only the parent is being deleted. Did I miss something in my mapping which is likely to be missed? Any hint in the right direction is more than welcome!

    Read the article

  • C# Design How to Elegantly wrap a DAL class

    - by guazz
    I have an application which uses MyGeneration's dOOdads ORM to generate its Data Access Layer. dOOdads works by generating a persistence class for each table in the database. It works like so: // Load and Save Employees emps = new Employees(); if(emps.LoadByPrimaryKey(42)) { emps.LastName = "Just Got Married"; emps.Save(); } // Add a new record Employees emps = new Employees(); emps.AddNew(); emps.FirstName = "Mr."; emps.LastName = "dOOdad"; emps.Save(); // After save the identity column is already here for me. int i = emps.EmployeeID; // Dynamic Query - All Employees with 'A' in their last name Employees emps = new Employees(); emps.Where.LastName.Value = "%A%"; emps.Where.LastName.Operator = WhereParameter.Operand.Like; emps.Query.Load(); For the above example (i.e. the Employees DAL object) I would like to know the best method/technique to abstract some of the implementation details on my classes. I don't believe that an Employee class should have Employees (the DAL) specifics in its methods - or perhaps this is acceptable? Is it possible to implement some form of repository pattern? Bear in mind that this is a high volume, performance critical application. Thanks, j
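
    One way to keep the dOOdads specifics out of the rest of the code is a thin repository that delegates to the generated persistence class. The sketch below is only an illustration built on the calls shown above (LoadByPrimaryKey, AddNew, Save, EmployeeID); the interface and method names are hypothetical, and for a performance-critical app the repository could hand back plain DTOs instead of the DAL objects.

    ```csharp
    // Hypothetical wrapper around the generated dOOdads Employees class.
    public interface IEmployeeRepository
    {
        Employees GetById(int employeeId);          // null when not found
        int Add(string firstName, string lastName); // returns the new identity
    }

    public class EmployeeRepository : IEmployeeRepository
    {
        public Employees GetById(int employeeId)
        {
            var emps = new Employees();
            return emps.LoadByPrimaryKey(employeeId) ? emps : null;
        }

        public int Add(string firstName, string lastName)
        {
            var emps = new Employees();
            emps.AddNew();
            emps.FirstName = firstName;
            emps.LastName = lastName;
            emps.Save();
            return emps.EmployeeID; // identity is populated after Save, per the example above
        }
    }
    ```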

    Read the article

  • Daisy Chain a VNC

    - by mastercork889
    Part 1. Theoretically, would daisy chaining a VNC work? E.g., if I wanted to VNC into a specific computer on a different network, could I VNC into a computer on that network and ultimately get into the computer I need? So computers A, B, C, D, and E are all on different networks from one another. I'm at A; could I VNC into B, then C, then D, then E? I understand that I can just VNC into E, but this is a theoretical question. Part 2. Could I do a sub-looped daisy chain with VNCs? E.g., A B C D C B E? Could that theoretically work, or would it screw up somewhere along the trail?

    Read the article

  • Store comparison in variable (or execute comparison when it's given as an string)

    - by BorrajaX
    Hello everyone. I'd like to know if the super-powerful Python allows storing a comparison in a variable or, if not, whether it's possible to call/execute a comparison when it's given as a string ("==" or "!="). I want to allow the users of my program the chance of giving a comparison as a string. For instance, let's say I have a list of... "products" and the user wants to select the products whose manufacturer is "foo". He would input something like: Product.manufacturer == "foo" and if the user wants the products whose manufacturer is not "bar" he would input Product.manufacturer != "bar" If the user inputs that line as a string, I create a tree with a structure like: != / \ manufacturer bar I'd like to allow that comparison to run properly, but I don't know how to make it happen if != is a string. The "manufacturer" field is a property, so I can properly get it from the Product class and store it (as a property) in the leaf, and well... "bar" is just a string. I'd like to know if I can do something similar to what I do with "manufacturer": storing it with a 'callable' (kind of) thing: the property with the comparator: != I have tried with "eval" and it may work, but the comparisons are actually going to be used to query a MySQL database (using sqlalchemy) and I'm a bit concerned about the security of that... Any idea will be deeply appreciated. Thank you! PS: The idea of all this is being able to generate a sqlalchemy query, so if the user inputs the string: Product.manufacturer != "foo" || Product.manufacturer != "bar" ... my tree thing can generate the following: sqlalchemy.or_(Product.manufacturer !="foo", Product.manufacturer !="bar") Since sqlalchemy.or_ is callable, I can also store it in one of the leaves... I only see a problem with the "!="
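
    For what it's worth, one minimal way to turn those operator strings into callables without eval is a lookup table over the standard operator module; because SQLAlchemy columns overload == and !=, the same callables produce filter criteria. This is only a sketch: the Product class and session are assumed, not taken from the question.

    ```python
    import operator

    # Map the operator tokens a user may type to callables (no eval involved).
    OPS = {
        "==": operator.eq,
        "!=": operator.ne,
    }

    def make_criterion(column, op_token, value):
        """Build a SQLAlchemy criterion such as Product.manufacturer != 'bar'."""
        return OPS[op_token](column, value)

    # Hypothetical usage with a mapped Product class:
    # criterion = make_criterion(Product.manufacturer, "!=", "bar")
    # session.query(Product).filter(criterion)
    ```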

    Read the article

  • Sphinx PHP Geodist problems

    - by James
    Below is my PHP function to call nearby points using Sphinx for MySQL. There are hundreds of thousands of nearby points which to what I can tell, are being indexed by Sphinx, but simply fails silently when searching with Sphinx. Other Sphinx queries I run against other indexes work completely fine. function nearby($latitude, $longitude, $radius) { global $sphinx; $sphinx->SetMatchMode(SPH_MATCH_ALL); $sphinx->SetArrayResult(true); $sphinx->SetLimits(0, 1000); $sphinx->SetGeoAnchor('latitude', 'longitude', (float) deg2rad($latitude), (float) deg2rad($longitude)); $circle = (float) $radius * 1.609344; $sphinx->SetFilterFloatRange('@geodist', 0.0, $circle); $matches = $sphinx->Query('', 'geo'); return $matches; } $nearby = nearby($latitude, $longitude, 10000); var_dump($nearby); This, when called with valid latitude and longitude co-ords and a very large radius for debugging produces: bool(false) Below is my sphinx.conf covering the geo part: source geo { type = mysql sql_host = 127.0.0.1 sql_user = user sql_pass = pass sql_db = db sql_port = 3306 sql_query_pre = set names utf8 sql_query_pre = set session query_cache_type=OFF sql_query = SELECT id,city,region,country,radians(longitude) AS longitude, radians(latitude) AS latitude FROM points; sql_attr_float = longitude sql_attr_float = latitude sql_ranged_throttle = 0 sql_query_info = SELECT * FROM points WHERE id = $id } index geo { source = geo path = /var/data/geo docinfo = extern #mlock = 0 #morphology = none min_word_len = 1 charset_type = utf-8 #charset_table = 0..9, A..Z->a..z, _, a..z, U+410..U+42F->U+430..U+44F, U+430..U+44F ignore_chars = U+00AD html_strip = 0 enable_star = 0 } indexer { mem_limit = 1024M } searchd { port = 3312 log = /var/log/searchd.log query_log = /var/log/query.log read_timeout = 5 max_children = 5 pid_file = /var/log/searchd.pid max_matches = 10000 seamless_rotate = 1 preopen_indexes = 0 unlink_old = 1 }
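
    When Query() returns false the client normally records why, so one small addition worth trying (a sketch, not a fix) is to surface the error and warning via the standard SphinxClient accessors instead of failing silently:

    ```php
    <?php
    // Sketch: log what searchd is actually rejecting instead of returning false silently.
    $matches = $sphinx->Query('', 'geo');
    if ($matches === false) {
        error_log('Sphinx error: ' . $sphinx->GetLastError());
    } elseif ($sphinx->GetLastWarning()) {
        error_log('Sphinx warning: ' . $sphinx->GetLastWarning());
    }
    ```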

    Read the article

  • How is WPF Data Binding using Object Data Source in Visual Studio 2010 done?

    - by Rob Perkins
    This is probably mostly a question about how to use the VS 2010 IDE tools in a way the Microsofties didn't specifically intend. But since this is something I immediately tried without success. I have defined a .NET 4.0 WPF Application project with a simple class that looks like this: Public Class Class1 Public Property One As String = "OneString" Public Property Two As String = "TwoString" End Class I then defined it as an "Object Data Source" in VS2010, using the IDE's "Add New Data Source..." feature. This exposes the class members in a GUI element in the IDE as given in this image: Dragging "Class1" from that tool to the surface of "Window1.xaml" in a default "WPF Application" results in the design view looking like this: And generated XAML like this: <Window x:Class="Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="133" Width="170" xmlns:my="clr-namespace:WpfApplication1" mc:Ignorable="d" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" > <Window.Resources> <CollectionViewSource x:Key="Class1ViewSource" d:DesignSource="{d:DesignInstance my:Class1, CreateList=True}" /> </Window.Resources> <Grid DataContext="{StaticResource Class1ViewSource}" HorizontalAlignment="Left" Name="Grid1" VerticalAlignment="Top"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" /> <ColumnDefinition Width="Auto" /> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> </Grid.RowDefinitions> <Label Content="One:" Grid.Column="0" Grid.Row="0" HorizontalAlignment="Left" Margin="3" VerticalAlignment="Center" /> <TextBlock Grid.Column="1" Grid.Row="0" Height="23" HorizontalAlignment="Left" Margin="3" Name="OneTextBlock" Text="{Binding Path=One}" VerticalAlignment="Center" /> <Label Content="Two:" Grid.Column="0" Grid.Row="1" HorizontalAlignment="Left" Margin="3" VerticalAlignment="Center" /> <TextBlock Grid.Column="1" Grid.Row="1" Height="23" HorizontalAlignment="Left" Margin="3" Name="TwoTextBlock" Text="{Binding Path=Two}" VerticalAlignment="Center" /> </Grid> Note the data bindings Text="{Binding Path=One}" and Text="{Binding Path=Two}" in the TextBlock elements. Code-behind for Window1.xaml has this in Window_Loaded: Class Window1 Private m_c1 As New Class1 Private Sub Window1_Loaded(ByVal sender As Object, ByVal e As System.Windows.RoutedEventArgs) Handles Me.Loaded Dim Class1ViewSource As System.Windows.Data.CollectionViewSource = CType(Me.FindResource("Class1ViewSource"), System.Windows.Data.CollectionViewSource) 'Load data by setting the CollectionViewSource.Source property: 'Class1ViewSource.Source = [generic data source] Me.DataContext = m_c1 End Sub End Class Running the application produces this output: The expected result was that "OneString" would appear next to "One" and "TwoString" next to "Two" in the running window. The question is: Why didn't this work? What will work instead? If I put bindings in a DataTemplate, it works. Blend, with its sample data stuff, implied that this should work, but it doesn't. I know I'm missing something pretty fundamental here; what is it?
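
    One detail worth noting: the generated comment in Window1_Loaded points at setting CollectionViewSource.Source, and the XAML grid already takes its DataContext from the Class1ViewSource resource, so overwriting Me.DataContext bypasses that resource. A hedged sketch of feeding the view source instead (not verified against the designer-generated code; the List wrapper is just one way to hand it a collection):

    ```vb
    Private Sub Window1_Loaded(ByVal sender As Object, ByVal e As System.Windows.RoutedEventArgs) Handles Me.Loaded
        Dim Class1ViewSource As System.Windows.Data.CollectionViewSource = CType(Me.FindResource("Class1ViewSource"), System.Windows.Data.CollectionViewSource)
        ' Feed the view source the grid already binds to, rather than replacing Me.DataContext.
        Dim items As New System.Collections.Generic.List(Of Class1)
        items.Add(m_c1)
        Class1ViewSource.Source = items
    End Sub
    ```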

    Read the article

  • How do I get SSIS Data Flow to put '0.00' in a flat file?

    - by theog
    I have an SSIS package with a Data Flow that takes an ADO.NET data source (just a small table), executes a select * query, and outputs the query results to a flat file (I've also tried just pulling the whole table and not using a SQL select). The problem is that the data source pulls a column that is a Money datatype, and if the value is not zero, it comes into the text flat file just fine (like '123.45'), but when the value is zero, it shows up in the destination flat file as '.00'. I need to know how to get the leading zero back into the flat file. I've tried various datatypes for the output (in the Flat File Connection Manager), including currency and string, but this seems to have no effect. I've tried a case statement in my select, like this: CASE WHEN columnValue = 0 THEN '0.00' ELSE columnValue END (still results in '.00') I've tried variations on that like this: CASE WHEN columnValue = 0 THEN convert(decimal(12,2), '0.00') ELSE convert(decimal(12,2), columnValue) END (Still results in '.00') and: CASE WHEN columnValue = 0 THEN convert(money, '0.00') ELSE convert(money, columnValue) END (results in '.0000000000000000000') This silly little issue is killin' me. Can anybody tell me how to get a zero Money datatype database value into a flat file as '0.00'?
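
    One workaround people reach for (a sketch, and it does change the column to text) is to do the formatting in the source query itself so SSIS passes the value through untouched; CONVERT with style 0 on a money value keeps two decimals and the leading zero. The table and column names below stand in for the real ones:

    ```sql
    -- Sketch: emit the money column pre-formatted as text ('0.00' rather than '.00').
    SELECT CONVERT(varchar(20), columnValue, 0) AS columnValueText
    FROM dbo.SourceTable;  -- hypothetical table name
    ```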

    Read the article

  • Filemaker to SQL Server via SSIS

    - by TexasT
    I'm using SSIS and trying to import data from FileMaker into SQL Server. In the Solution Explorer, I right click on "SSIS Packages" and select "SQL Server Import and Export Wizard". During the process, I use my DSN as the source, SQL Server as the destination, use a valid query to pull data from FileMaker, and set the mappings. Each time I try to run the package, I receive the following message: The "output column "LastNameFirst" (12)" has a length that is not valid. The length must be between 0 and 4000. I do not understand this error exactly, but in the documentation for ODBC: http://www.filemaker.com/downloads/pdf/fm9%5Fodbc%5Fjdbc%5Fguide%5Fen.pdf (page 47) it states: "The maximum column length of text is 1 million characters, unless you specify a smaller Maximum number of characters for the text field in FileMaker. FileMaker returns empty strings as NULL." I'm thinking that the data type is too large when trying to convert it to varchar. But even after using a query of SUBSTR(LastNameFirst, 1, 2000), I get the same error. Any suggestions?

    Read the article

  • Cannot create Java VM on OpenVZ

    - by Stephen Searles
    I'm constantly encountering an error related to Java and certificates on my Ubuntu server running in OpenVZ when installing things from apt-get. I'm pretty sure it has to do with how Java allocates memory. I know the fail counter for privvmpages is very high, so the problem must be that Java is hitting this limit. I have read that the server VM will allocate a lot of memory up front to preempt performance issues, but that the client VM doesn't do this and might be better for what I'm doing. I messed with jvm.cfg to make the system go to the client VM, but get an error that it can't find the client VM. I have tried replacing the Java binary with a script calling Java with -Xms and -Xmx settings, and that solves the issue for when I call basic things from the command line, but not for when doing things like having apt-get configure certificates. I'm at a loss for what to try next. I need to get this working, but simply increasing privvmpages is not an available option. I have the actual error pasted below. Setting up ca-certificates-java (20100412) ... creating /etc/ssl/certs/java/cacerts... Could not create the Java virtual machine. error adding brasil.gov.br/brasil.gov.br.crt error adding cacert.org/cacert.org.crt error adding debconf.org/ca.crt error adding gouv.fr/cert_igca_dsa.crt error adding gouv.fr/cert_igca_rsa.crt error adding mozilla/ABAecom_=sub.__Am._Bankers_Assn.=_Root_CA.crt error adding mozilla/AOL_Time_Warner_Root_Certification_Authority_1.crt error adding mozilla/AOL_Time_Warner_Root_Certification_Authority_2.crt error adding mozilla/AddTrust_External_Root.crt error adding mozilla/AddTrust_Low-Value_Services_Root.crt error adding mozilla/AddTrust_Public_Services_Root.crt error adding mozilla/AddTrust_Qualified_Certificates_Root.crt error adding mozilla/America_Online_Root_Certification_Authority_1.crt error adding mozilla/America_Online_Root_Certification_Authority_2.crt error adding mozilla/Baltimore_CyberTrust_Root.crt error adding mozilla/COMODO_Certification_Authority.crt error adding mozilla/COMODO_ECC_Certification_Authority.crt error adding mozilla/Camerfirma_Chambers_of_Commerce_Root.crt error adding mozilla/Camerfirma_Global_Chambersign_Root.crt error adding mozilla/Certplus_Class_2_Primary_CA.crt error adding mozilla/Certum_Root_CA.crt error adding mozilla/Comodo_AAA_Services_root.crt error adding mozilla/Comodo_Secure_Services_root.crt error adding mozilla/Comodo_Trusted_Services_root.crt error adding mozilla/DST_ACES_CA_X6.crt error adding mozilla/DST_Root_CA_X3.crt error adding mozilla/DigiCert_Assured_ID_Root_CA.crt error adding mozilla/DigiCert_Global_Root_CA.crt error adding mozilla/DigiCert_High_Assurance_EV_Root_CA.crt Could not create the Java virtual machine. 
error adding mozilla/Digital_Signature_Trust_Co._Global_CA_1.crt error adding mozilla/Digital_Signature_Trust_Co._Global_CA_2.crt error adding mozilla/Digital_Signature_Trust_Co._Global_CA_3.crt error adding mozilla/Digital_Signature_Trust_Co._Global_CA_4.crt error adding mozilla/Entrust.net_Global_Secure_Personal_CA.crt error adding mozilla/Entrust.net_Global_Secure_Server_CA.crt error adding mozilla/Entrust.net_Premium_2048_Secure_Server_CA.crt error adding mozilla/Entrust.net_Secure_Personal_CA.crt error adding mozilla/Entrust.net_Secure_Server_CA.crt error adding mozilla/Entrust_Root_Certification_Authority.crt error adding mozilla/Equifax_Secure_CA.crt error adding mozilla/Equifax_Secure_Global_eBusiness_CA.crt error adding mozilla/Equifax_Secure_eBusiness_CA_1.crt error adding mozilla/Equifax_Secure_eBusiness_CA_2.crt error adding mozilla/Firmaprofesional_Root_CA.crt error adding mozilla/GTE_CyberTrust_Global_Root.crt error adding mozilla/GTE_CyberTrust_Root_CA.crt error adding mozilla/GeoTrust_Global_CA.crt error adding mozilla/GeoTrust_Global_CA_2.crt error adding mozilla/GeoTrust_Primary_Certification_Authority.crt error adding mozilla/GeoTrust_Universal_CA.crt error adding mozilla/GeoTrust_Universal_CA_2.crt error adding mozilla/GlobalSign_Root_CA.crt error adding mozilla/GlobalSign_Root_CA_-_R2.crt error adding mozilla/Go_Daddy_Class_2_CA.crt error adding mozilla/IPS_CLASE1_root.crt error adding mozilla/IPS_CLASE3_root.crt error adding mozilla/IPS_CLASEA1_root.crt error adding mozilla/IPS_CLASEA3_root.crt error adding mozilla/IPS_Chained_CAs_root.crt error adding mozilla/IPS_Servidores_root.crt error adding mozilla/IPS_Timestamping_root.crt error adding mozilla/NetLock_Business_=Class_B=_Root.crt error adding mozilla/NetLock_Express_=Class_C=_Root.crt error adding mozilla/NetLock_Notary_=Class_A=_Root.crt error adding mozilla/NetLock_Qualified_=Class_QA=_Root.crt error adding mozilla/Network_Solutions_Certificate_Authority.crt error adding mozilla/QuoVadis_Root_CA.crt error adding mozilla/QuoVadis_Root_CA_2.crt error adding mozilla/QuoVadis_Root_CA_3.crt error adding mozilla/RSA_Root_Certificate_1.crt error adding mozilla/RSA_Security_1024_v3.crt error adding mozilla/RSA_Security_2048_v3.crt error adding mozilla/SecureTrust_CA.crt error adding mozilla/Secure_Global_CA.crt error adding mozilla/Security_Communication_Root_CA.crt error adding mozilla/Sonera_Class_1_Root_CA.crt error adding mozilla/Sonera_Class_2_Root_CA.crt error adding mozilla/Staat_der_Nederlanden_Root_CA.crt error adding mozilla/Starfield_Class_2_CA.crt error adding mozilla/StartCom_Certification_Authority.crt error adding mozilla/StartCom_Ltd..crt error adding mozilla/SwissSign_Gold_CA_-_G2.crt error adding mozilla/SwissSign_Platinum_CA_-_G2.crt error adding mozilla/SwissSign_Silver_CA_-_G2.crt error adding mozilla/Swisscom_Root_CA_1.crt error adding mozilla/TC_TrustCenter__Germany__Class_2_CA.crt error adding mozilla/TC_TrustCenter__Germany__Class_3_CA.crt error adding mozilla/TDC_Internet_Root_CA.crt error adding mozilla/TDC_OCES_Root_CA.crt error adding mozilla/TURKTRUST_Certificate_Services_Provider_Root_1.crt error adding mozilla/TURKTRUST_Certificate_Services_Provider_Root_2.crt error adding mozilla/Taiwan_GRCA.crt error adding mozilla/Thawte_Personal_Basic_CA.crt error adding mozilla/Thawte_Personal_Freemail_CA.crt error adding mozilla/Thawte_Personal_Premium_CA.crt error adding mozilla/Thawte_Premium_Server_CA.crt error adding mozilla/Thawte_Server_CA.crt error adding mozilla/Thawte_Time_Stamping_CA.crt 
error adding mozilla/UTN-USER_First-Network_Applications.crt error adding mozilla/UTN_DATACorp_SGC_Root_CA.crt error adding mozilla/UTN_USERFirst_Email_Root_CA.crt error adding mozilla/UTN_USERFirst_Hardware_Root_CA.crt error adding mozilla/ValiCert_Class_1_VA.crt error adding mozilla/ValiCert_Class_2_VA.crt error adding mozilla/VeriSign_Class_3_Public_Primary_Certification_Authority_-_G5.crt error adding mozilla/Verisign_Class_1_Public_Primary_Certification_Authority.crt error adding mozilla/Verisign_Class_1_Public_Primary_Certification_Authority_-_G2.crt error adding mozilla/Verisign_Class_1_Public_Primary_Certification_Authority_-_G3.crt error adding mozilla/Verisign_Class_2_Public_Primary_Certification_Authority.crt error adding mozilla/Verisign_Class_2_Public_Primary_Certification_Authority_-_G2.crt error adding mozilla/Verisign_Class_2_Public_Primary_Certification_Authority_-_G3.crt error adding mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt error adding mozilla/Verisign_Class_3_Public_Primary_Certification_Authority_-_G2.crt error adding mozilla/Verisign_Class_3_Public_Primary_Certification_Authority_-_G3.crt error adding mozilla/Verisign_Class_4_Public_Primary_Certification_Authority_-_G2.crt error adding mozilla/Verisign_Class_4_Public_Primary_Certification_Authority_-_G3.crt error adding mozilla/Verisign_RSA_Secure_Server_CA.crt error adding mozilla/Verisign_Time_Stamping_Authority_CA.crt error adding mozilla/Visa_International_Global_Root_2.crt error adding mozilla/Visa_eCommerce_Root.crt error adding mozilla/WellsSecure_Public_Root_Certificate_Authority.crt error adding mozilla/Wells_Fargo_Root_CA.crt error adding mozilla/XRamp_Global_CA_Root.crt error adding mozilla/beTRUSTed_Root_CA-Baltimore_Implementation.crt error adding mozilla/beTRUSTed_Root_CA.crt error adding mozilla/beTRUSTed_Root_CA_-_Entrust_Implementation.crt error adding mozilla/beTRUSTed_Root_CA_-_RSA_Implementation.crt error adding mozilla/thawte_Primary_Root_CA.crt error adding signet.pl/signet_ca1_pem.crt error adding signet.pl/signet_ca2_pem.crt error adding signet.pl/signet_ca3_pem.crt error adding signet.pl/signet_ocspklasa2_pem.crt error adding signet.pl/signet_ocspklasa3_pem.crt error adding signet.pl/signet_pca2_pem.crt error adding signet.pl/signet_pca3_pem.crt error adding signet.pl/signet_rootca_pem.crt error adding signet.pl/signet_tsa1_pem.crt error adding spi-inc.org/spi-ca-2003.crt error adding spi-inc.org/spi-cacert-2008.crt error adding telesec.de/deutsche-telekom-root-ca-2.crt failed (VM used: java-6-openjdk). dpkg: error processing ca-certificates-java (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: ca-certificates-java E: Sub-process /usr/bin/dpkg returned an error code (1)
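
    For reference, the wrapper-script idea mentioned above usually looks something like the sketch below: the real binary is moved aside and the wrapper forces a small heap so the JVM asks the container for less memory up front. The path and heap sizes are illustrative, not from the post.

    ```sh
    #!/bin/sh
    # Hypothetical wrapper installed in place of the java binary (the original
    # renamed to java.orig) to cap heap allocation inside the OpenVZ container.
    exec /usr/lib/jvm/java-6-openjdk/jre/bin/java.orig -Xms16m -Xmx64m "$@"
    ```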

    Read the article

  • Django error about django-sphinx

    - by zjm1126
    from django.db import models from djangosphinx.models import SphinxSearch class MyModel(models.Model): search = SphinxSearch() # optional: defaults to db_table # If your index name does not match MyModel._meta.db_table # Note: You can only generate automatic configurations from the ./manage.py script # if your index name matches. search = SphinxSearch('index_name') # Or maybe we want to be more.. specific searchdelta = SphinxSearch( index='index_name delta_name', weights={ 'name': 100, 'description': 10, 'tags': 80, }, mode='SPH_MATCH_ALL', rankmode='SPH_RANK_NONE', ) queryset = MyModel.search.query('query') results1 = queryset.order_by('@weight', '@id', 'my_attribute') results2 = queryset.filter(my_attribute=5) results3 = queryset.filter(my_other_attribute=[5, 3,4]) results4 = queryset.exclude(my_attribute=5)[0:10] results5 = queryset.count() # as of 2.0 you can now access an attribute to get the weight and similar arguments for result in results1: print result, result._sphinx # you can also access a similar set of meta data on the queryset itself (once it's been sliced or executed in any way) print results1._sphinx and Traceback (most recent call last): File "D:\zjm_code\sphinx_test\models.py", line 1, in <module> from django.db import models File "D:\Python25\Lib\site-packages\django\db\__init__.py", line 10, in <module> if not settings.DATABASE_ENGINE: File "D:\Python25\Lib\site-packages\django\utils\functional.py", line 269, in __getattr__ self._setup() File "D:\Python25\Lib\site-packages\django\conf\__init__.py", line 38, in _setup raise ImportError("Settings cannot be imported, because environment variable %s is undefined." % ENVIRONMENT_VARIABLE) ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
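
    The traceback itself is about DJANGO_SETTINGS_MODULE being unset when the module is run directly, not about django-sphinx. One common way to run such a script standalone is to point Django at the settings module before anything imports django.db; the "settings" module name below is an assumption:

    ```python
    import os

    # Must run before `from django.db import models` is executed.
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings")
    ```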

    Read the article

  • ICollectionView.SortDescriptions sort does not work if underlying DataTable has zero rows

    - by BigBlondeViking
    We have a WPF app that has a DataGrid inside a ListView. private DataTable table_; We do a bunch of dynamic column generation (depending on the report we are showing). We then run a query and fill the DataTable row by row; this query may or may not have data (not the problem, an empty grid is expected). We set the ListView's ItemsSource to the DefaultView of the DataTable. lv.ItemsSource = table_.DefaultView; We then (based on the user's past usage of the app) set the sort on a column. Sort method below: private void Sort(string sortBy, ListSortDirection direction) { ICollectionView dataView = CollectionViewSource.GetDefaultView(lv.ItemsSource); dataView.SortDescriptions.Clear(); var sd = new SortDescription(sortBy, direction); dataView.SortDescriptions.Add(sd); dataView.Refresh(); } In the zero-DataTable-rows scenario, the sort does not "hold", and if we dynamically add rows they will not be in sorted order. If the DataTable has at least 1 row when the sort is applied, and we dynamically add rows to the DataTable, the rows come in sorted correctly. I have built a standalone app that replicates this... It is an annoyance and I can add a check to see if the DataTable was empty, and re-apply the sort... Anyone know what's going on here, and am I doing something wrong? FYI: What we based this off of comes from MSDN as well: http://msdn.microsoft.com/en-us/library/ms745786.aspx
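
    As a stopgap along the lines of the check mentioned above, one could re-apply the sort as soon as the first row arrives. This is only a sketch: lastSortBy and lastSortDirection are hypothetical fields remembering the last sort, not part of the original code.

    ```csharp
    // Sketch: if the sort was applied while the table was empty, re-apply it
    // when the first row shows up.
    table_.RowChanged += (sender, e) =>
    {
        if (e.Action == DataRowAction.Add && table_.Rows.Count == 1)
        {
            Sort(lastSortBy, lastSortDirection);
        }
    };
    ```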

    Read the article

  • C# Linq List Contains Similar Elements

    - by John Peters
    Hi All, I am looking for linq query to see if there exists a similar object I have an object graph as follows Cart myCart = new Cart { List<CartProduct> myCartProduct = new List<CartProduct> { CartProduct cartProduct1 = new CartProduct { List<CartProductAttribute> a = new List<CartProductAttribute> { CartProductAttribute cpa1 = new CartProductAttribute{ title="red" }, CartProductAttribute cpa2 = new CartProductAttribute{ title="small" } } } CartProduct cartProduct2 = new CartProduct { List<CartProductAttribute> d = new List<CartProductAttribute> { CartProductAttribute cpa3 = new CartProductAttribute{ title="john" }, CartProductAttribute cpa4 = new CartProductAttribute{ title="mary" } } } } } I would like to get from the Cart = a CartProduct that has the exact same CartProductAttribute title values as a CartProduct that I need to compare. No more and no less. E.G. I need to find a similar CartProduct that has a CartProductAttribute with title="red" and a cartProductAttribute with title="small" in myCart (eg 'cartProduct1' in the example) CartProduct cartProductToCompare = new CartProduct { List<CartProductAttribute> cartProductToCompareAttributes = new List<CartProductAttribute> { CartProductAttribute cpa5 = new CartProductAttribute{ title="red" }, CartProductAttribute cpa6 = new CartProductAttribute{ title="small" } } } So from object graph myCart cartProduct1 cpa1 (title=red) cpa2 (title=small) cartProduct2 cpa3 (title=john) cpa4 (title=mary) Linq query looking for cartProductToCompare cpa5 (title=red) cpa6 (title=small) Should find cartProduct1 Hope all this makes sense... Thanks
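
    One way to express "same attribute titles, no more and no less" is to compare the title sets for equality. The property names below (Products, Attributes, Title) are stand-ins for whatever the real Cart/CartProduct members are called, and the snippet assumes using System.Linq and System.Collections.Generic:

    ```csharp
    var wantedTitles = new HashSet<string>(
        cartProductToCompare.Attributes.Select(a => a.Title));

    // Finds cartProduct1 in the example above: same titles, none missing, no extras.
    var match = myCart.Products.FirstOrDefault(p =>
        wantedTitles.SetEquals(p.Attributes.Select(a => a.Title)));
    ```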

    Read the article

  • How to override ATTR_DEFAULT_IDENTIFIER_OPTIONS in Models in Doctrine?

    - by user309083
    Here someone explained that setting a 'primary' attribute for any row in your Model will override Doctrine_Manager's ATTR_DEFAULT_IDENTIFIER_OPTIONS attribute: http://stackoverflow.com/questions/2040675/how-do-you-override-a-constant-in-doctrines-models This works, however if you have a many to many relation whereby the intermediate table is created, even if you have set both columns in the intermediate to primary an error still results when Doctrine tries to place an index on the nonexistant 'id' column upon table creation. Here's my code: //Bootstrap // set the default primary key to be named 'id', integer, 4 bytes Doctrine_Manager::getInstance()->setAttribute( Doctrine_Core::ATTR_DEFAULT_IDENTIFIER_OPTIONS, array('name' => 'id', 'type' => 'integer', 'length' => 4)); //User Model class User extends Doctrine_Record { public function setTableDefinition() { $this->setTableName('users'); } public function setUp() { $this->hasMany('Role as roles', array( 'local' => 'id', 'foreign' => 'user_id', 'refClass' => 'UserRole', 'onDelete' => 'CASCADE' )); } } //Role Model class Role extends Doctrine_Record { public function setTableDefinition() { $this->setTableName('roles'); } public function setUp() { $this->hasMany('User as users', array( 'local' => 'id', 'foreign' => 'role_id', 'refClass' => 'UserRole' )); } } //UserRole Model class UserRole extends Doctrine_Record { public function setTableDefinition() { $this->setTableName('roles_users'); $this->hasColumn('user_id', 'integer', 4, array('primary'=>true)); $this->hasColumn('role_id', 'integer', 4, array('primary'=>true)); } } Resulting error: SQLSTATE[42000]: Syntax error or access violation: 1072 Key column 'id' doesn't exist in table. Failing Query: "CREATE TABLE roles_users (user_id INT UNSIGNED NOT NULL, role_id INT UNSIGNED NOT NULL, INDEX id_idx (id), PRIMARY KEY(user_id, role_id)) ENGINE = INNODB". Failing Query: CREATE TABLE roles_users (user_id INT UNSIGNED NOT NULL, role_id INT UNSIGNED NOT NULL, INDEX id_idx (id), PRIMARY KEY(user_id, role_id)) ENGINE = INNODB I'm creating my tables using Doctrine::createTablesFromModels();

    Read the article

  • NHibernate Configure() and BuildSessionFactory() time

    - by davidsleeps
    Hi, I'm using Nhibernate as the OR/M tool for an asp.net application and the startup performance is really frustrating. Part of the problem is definitely me in my lack of understanding but I've tried a fair bit (understanding is definitely improving) and am still getting nowhere. Currently ANTS profiler has that the Configure() takes 13-18 seconds and the BuildSessionFActory() as taking about 5 seconds. From what i've read, these times might actually be pretty good, but they were generally talking about hundreds upon hundreds of mapped entities...this project only has 10. I've combined all the mapping files into a single hbm mapping file and this did improve things but only down to the times mentioned above... I guess, are there any "Traps for young players" that are regularly missed...obvious "I did this/have you enabled that/exclude file x/mark file y as z" etc... I'll try the serialize the configuration thing to avoid the Configure() stage, but I feel that part shouldn't be that long for that amount of entities and so would essentially be hiding a current problem... I will post source code or configuration if necessary, but I'm not sure what to put in really... thanks heaps! edit (more info) I'll also add that once this is completed, each page is extremely quick... configuration code- hibernate.cfg.xml <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="hibernate-configuration" type="NHibernate.Cfg.ConfigurationSectionHandler, NHibernate" /> </configSections> <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2"> <session-factory> <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property> <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property> <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property> <property name="connection.connection_string_name">MyAppDEV</property> <property name="cache.provider_class">NHibernate.Caches.SysCache.SysCacheProvider, NHibernate.Caches.SysCache</property> <property name="cache.use_second_level_cache">true</property> <property name="show_sql">false</property> <property name="proxyfactory.factory_class">NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property> <property name="current_session_context_class">managed_web</property> <mapping assembly="MyApp.Domain"/> </session-factory> </hibernate-configuration> </configuration> My SessionManager class which is bound and unbound in a HttpModule for each request Imports NHibernate Imports NHibernate.Cfg Public Class SessionManager Private ReadOnly _sessionFactory As ISessionFactory Public Shared ReadOnly Property SessionFactory() As ISessionFactory Get Return Instance._sessionFactory End Get End Property Private Function GetSessionFactory() As ISessionFactory Return _sessionFactory End Function Public Shared ReadOnly Property Instance() As SessionManager Get Return NestedSessionManager.theSessionManager End Get End Property Public Shared Function OpenSession() As ISession Return Instance.GetSessionFactory().OpenSession() End Function Public Shared ReadOnly Property CurrentSession() As ISession Get Return Instance.GetSessionFactory().GetCurrentSession() End Get End Property Private Sub New() Dim configuration As Configuration = New Configuration().Configure() _sessionFactory = configuration.BuildSessionFactory() End Sub Private Class NestedSessionManager Friend Shared ReadOnly theSessionManager As New SessionManager() End Class End Class edit 2 (log4net 
results) will post bits that have a portion of time between them and will cut out the rest... 2010-03-30 23:29:40,898 [4] INFO NHibernate.Cfg.Environment [(null)] - Using reflection optimizer 2010-03-30 23:29:42,481 [4] DEBUG NHibernate.Cfg.Configuration [(null)] - dialect=NHibernate.Dialect.MsSql2005Dialect ... 2010-03-30 23:29:42,501 [4] INFO NHibernate.Cfg.Configuration [(null)] - Mapping resource: MyApp.Domain.Mappings.hbm.xml 2010-03-30 23:29:43,342 [4] INFO NHibernate.Dialect.Dialect [(null)] - Using dialect: NHibernate.Dialect.MsSql2005Dialect 2010-03-30 23:29:50,462 [4] INFO NHibernate.Cfg.XmlHbmBinding.Binder [(null)] - Mapping class: ... 2010-03-30 23:29:51,353 [4] DEBUG NHibernate.Connection.DriverConnectionProvider [(null)] - Obtaining IDbConnection from Driver 2010-03-30 23:29:53,136 [4] DEBUG NHibernate.Connection.ConnectionProvider [(null)] - Closing connection

    Read the article

  • proxy pass for activeMQ

    - by user1172482
    I have an Apache server that I'm trying to use to proxy access to my ActiveMQ admin page. I am able to load the initial landing page properly, but I can't seem to load any of the sub-pages (Queues, Connections, etc.). My ProxyPass rules on the Apache server are the following: ProxyPass /foo http://10.5.124.108:8161/admin ProxyPassReverse /foo http://10.5.124.108:8161/admin The ActiveMQ installation included an activemq-httpd.conf file in /etc/httpd/conf.d/. Proxy connections there are enabled: ProxyRequests On ProxyVia On <Proxy *> Allow from all Order allow,deny </Proxy> ProxyPass /admin http://localhost:8161/admin ProxyPassReverse /admin http://localhost:8161/admin ProxyPass /message http://localhost:8161/admin/send ProxyPassReverse /message http://localhost:8161/admin/send From what I've read the ProxyPass rules should be recursive (the rule for /foo should also work for /foo/bar). Is there something else that I'm missing here that's preventing me from accessing pages beyond the initial admin landing page?
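
    A hedged guess at the cause: the ActiveMQ console emits absolute links under /admin, which a /foo prefix cannot rewrite, so only the landing page resolves. Proxying under the same /admin path on the front server, as the bundled activemq-httpd.conf already does, is one way to sidestep that:

    ```apache
    # Sketch: keep the external path identical to the backend path so the
    # console's absolute /admin/... links keep working through the proxy.
    ProxyPass        /admin http://10.5.124.108:8161/admin
    ProxyPassReverse /admin http://10.5.124.108:8161/admin
    ```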

    Read the article

  • MS Chart in WPF, Setting the DataSource is not creating the Series

    - by Shaik Phakeer
    Hi all, here I am trying to assign the data source (using the same code given in the sample application) and create a graph; the only difference is I am doing it in a WPF WindowsFormsHost. For some reason the data source is not being assigned properly and I am not able to see the series ("Series 1") being created. The weird thing is that it works in the Windows Forms application but not in the WPF one. Am I missing something, and can somebody help me? Thanks <Window x:Class="SEDC.MDM.WinUI.WindowsFormsHostWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:wf="clr-namespace:System.Windows.Forms;assembly=System.Windows.Forms" xmlns:CHR="clr-namespace:System.Windows.Forms.DataVisualization.Charting;assembly=System.Windows.Forms.DataVisualization" Title="HostingWfInWpf" Height="230" Width="338"> <Grid x:Name="grid1"> </Grid> </Window> private void drawChartDataBinding() { System.Windows.Forms.Integration.WindowsFormsHost host = new System.Windows.Forms.Integration.WindowsFormsHost(); string fileNameString = @"C:\Users\Shaik\MSChart\WinSamples\WinSamples\data\chartdata.mdb"; // initialize a connection string string myConnectionString = "PROVIDER=Microsoft.Jet.OLEDB.4.0;Data Source=" + fileNameString; // define the database query string mySelectQuery = "SELECT * FROM REPS;"; // create a database connection object using the connection string OleDbConnection myConnection = new OleDbConnection(myConnectionString); // create a database command on the connection using query OleDbCommand myCommand = new OleDbCommand(mySelectQuery, myConnection); Chart Chart1 = new Chart(); // set chart data source Chart1.DataSource = myCommand; // set series members names for the X and Y values Chart1.Series["Series 1"].XValueMember = "Name"; Chart1.Series["Series 1"].YValueMembers = "Sales"; // data bind to the selected data source Chart1.DataBind(); myCommand.Dispose(); myConnection.Close(); host.Child = Chart1; this.grid1.Children.Add(host); } Shaik

    Read the article

  • Mysql SELECT FOR UPDATE - strange issue

    - by Michal Fronczyk
    Hi, I have a strange issue (at least for me :)) with the MySQL's locking facility. I have a table: Create Table: CREATE TABLE test ( id int(11) NOT NULL AUTO_INCREMENT, PRIMARY KEY (id) ) ENGINE=InnoDB AUTO_INCREMENT=13 DEFAULT CHARSET=latin1 With this data: +----+ | id | +----+ | 3 | | 4 | | 5 | | 6 | | 7 | | 8 | | 10 | | 11 | | 12 | +----+ Now I have 2 clients with these commands executed at the beginning: set autocommit=0; set session transaction isolation level serializable; begin; Now the most interesting part. The first client executes this query: (makes an intent to insert a row with id equal to 9) SELECT * from test where id = 9 FOR UPDATE; Empty set (0.00 sec) Then the second client does the same: SELECT * from test where id = 9 FOR UPDATE; Empty set (0.00 sec) My question is: Why the second client does not block ? An exclusive gap lock should have been set by the first query because FOR UPDATE have been used and the second client should block. If I am wrong, could somebody tell me how to do it correctly ? The MySql version I use is: 5.1.37-1ubuntu5.1 Thanks, Michal

    Read the article

  • EF4 querying from parent to grandchildren

    - by Hans Kesting
    I have a model with Parents, Children and Grandchildren, in a many-to-many relationship. Using this article I created POCO classes that work fine, except for one thing I can't yet figure out. When I query the Parents or Children directly using LINQ, the SQL reflects the LINQ query (a .Count() executes a COUNT in the database and so on) - fine. The Parent class has a Children property, to access its children. But (and now for the problem) this doesn't expose an IQueryable interface but an ICollection. So when I access the Children property on a particular parent all the Parent's Children are read. Even worse, when I access the Grandchildren (theParent.Children.SelectMany(child => child.GrandChildren).Count()) then for each and every child a separate request is issued to select all data of its grandchildren. That's a lot of separate queries! Changing the type of the Children property from ICollection to IQueryable doesn't solve this. Apart from missing methods I need, like Add() and Remove(), EF just doesn't recognize the navigation property then. Are there correct ways (as in: low database interaction) of querying through children (and what are they)? Or is this just not possible?
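
    Since the navigation properties are plain collections that load everything, one low-chatter alternative is to phrase the aggregate against the context so it runs as a single COUNT in SQL instead of walking theParent.Children. The sketch below assumes a GrandChildren set on the context and the obvious reverse navigation properties; it is an illustration, not the only option:

    ```csharp
    // Sketch: count one parent's grandchildren in a single database round trip.
    int grandChildCount = context.GrandChildren
        .Count(gc => gc.Children.Any(c => c.Parents.Any(p => p.Id == parentId)));
    ```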

    Read the article

  • ASP.Net Entity Framework Repository & Linq

    - by Chris Klepeis
    My scenario: This is an ASP.NET 4.0 web app programmed via C#. I implement a repository pattern. My repositories all share the same ObjectContext, which is stored in httpContext.Items. Each repository creates a new ObjectSet of type E. Here's some code from my repository: public class Repository<E> : IRepository<E>, IDisposable where E : class { private DataModelContainer _context = ContextHelper<DataModelContainer>.GetCurrentContext(); private IObjectSet<E> _objectSet; private IObjectSet<E> objectSet { get { if (_objectSet == null) { _objectSet = this._context.CreateObjectSet<E>(); } return _objectSet; } } public IQueryable<E> GetQuery() { return objectSet; } Let's say I have 2 repositories, 1 for states and 1 for countries, and want to create a LINQ query against both. Note that I use POCO classes with the Entity Framework. State and Country are 2 of these POCO classes. Repository<State> _stateRepo = new Repository<State>(); Repository<Country> _countryRepo = new Repository<Country>(); IEnumerable<State> states = (from s in _stateRepo.GetQuery() join c in _countryRepo.GetQuery() on s.countryID equals c.countryID select s).ToList(); Debug.WriteLine(states.First().Country.country); essentially, I want to retrieve the state and the related country entity. The query only returns the state data... and I get a null argument exception on the Debug.WriteLine. LazyLoading is disabled in my .edmx... that's the way I want it.
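
    With lazy loading off, the join above only materializes State, so s.Country stays null. One sketch that keeps the repositories as-is is to project both entities and consume the pair directly (relationship fix-up may also populate State.Country once both ends are tracked by the same context, but the anonymous type does not rely on that):

    ```csharp
    // Sketch: pull the related Country alongside each State in one query.
    var rows = (from s in _stateRepo.GetQuery()
                join c in _countryRepo.GetQuery() on s.countryID equals c.countryID
                select new { State = s, Country = c }).ToList();

    Debug.WriteLine(rows.First().Country.country);
    ```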

    Read the article

  • Does the WMI event Win32_VolumeChangeEvent work on Windows XP

    - by Christian Rodemeyer
    I'm trying to use the following C# code to detect the attached/removed event of USB mass storage devices. I'm using the Win32_VolumeChangeEvent. // Initialize an event watcher and subscribe to events that match this query var _watcher = new ManagementEventWatcher("select * from Win32_VolumeChangeEvent"); _watcher.EventArrived += OnDeviceChanged; _watcher.Start(); void OnDeviceChanged(object sender, EventArrivedEventArgs args) { Console.WriteLine(args.NewEvent.GetText(TextFormat.Mof)); } The problem is that this works fine on Vista but it doesn't work on XP at all (no events received). The Microsoft documentation says that this should work (http://msdn.microsoft.com/en-us/library/aa394516(VS.85).aspx). I googled for this quite a while and found others that have this problem too. But I also found a couple of articles which claim that this kind of query (mostly in VBScript) works with XP. But I can't find any official information from Microsoft on this issue, and I can't believe that Microsoft has overlooked it for three service packs. So my question is: has anybody used the Win32_VolumeChangeEvent with success on XP, or can anyone provide a link/explanation why it shouldn't work on XP?
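
    Separately from whether Win32_VolumeChangeEvent fires on XP, a fallback often used on older systems is an intrinsic event query against Win32_LogicalDisk. The sketch below watches logical disks appear and disappear; the 2-second polling interval is arbitrary and it assumes using System.Management:

    ```csharp
    var query = "SELECT * FROM __InstanceOperationEvent WITHIN 2 " +
                "WHERE TargetInstance ISA 'Win32_LogicalDisk'";
    var watcher = new ManagementEventWatcher(query);
    watcher.EventArrived += (s, args) =>
    {
        var disk = (ManagementBaseObject)args.NewEvent["TargetInstance"];
        // __InstanceCreationEvent / __InstanceDeletionEvent tells you attach vs. remove.
        Console.WriteLine("{0}: {1}", args.NewEvent.ClassPath.ClassName, disk["DeviceID"]);
    };
    watcher.Start();
    ```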

    Read the article

  • Enabling Service Broker in SQL Server 2008

    - by Truegilly
    Hello, I am integrating SqlCacheDependency to use in my LINQ to SQL DataContext. I am using an extension class for LINQ queries found here - http://code.msdn.microsoft.com/linqtosqlcache I have wired up the code and when I open the page I get this exception - "The SQL Server Service Broker for the current database is not enabled, and as a result query notifications are not supported. Please enable the Service Broker for this database if you wish to use notifications." It's coming from this event in the global.asax: protected void Application_Start() { RegisterRoutes(RouteTable.Routes); //In Application Start Event System.Data.SqlClient.SqlDependency.Start(new dataContextDataContext().Connection.ConnectionString); } My question is... how do I enable Service Broker in my SQL Server 2008 database? I have tried to run this query: ALTER DATABASE tablename SET ENABLE_BROKER but it never ends and runs forever, I have to manually stop it. Once I have this set in SQL Server 2008, will it filter down to my DataContext, or do I need to configure something there too? Thanks for any help, Truegilly
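
    The ALTER DATABASE that never finishes is usually a locking issue: ENABLE_BROKER needs exclusive access to the database, so it waits behind any open connection. The usual pattern is to run it against the database name (not a table) with a termination clause; "YourDatabase" below is a placeholder:

    ```sql
    ALTER DATABASE YourDatabase
    SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;  -- kicks out sessions holding the lock
    ```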

    Read the article

  • Segmentation Fault (11) with modwsgi on CentOS 5.7 when running pyramid app

    - by carbotex
    I'm getting Segmentation fault error when trying to access the "Hello World" pyramid app. This error only occurs when running against CentOS 5.7 setup, but no problem whatsoever when tested against OSX and Arch Linux. Could it be a CentOS specific issue? [error] [client 10.211.55.2] Premature end of script headers: pyramid.wsgi [notice] child pid 31212 exit signal Segmentation fault (11) I have tried to follow the troubleshooting guides posted here http://code.google.com/p/modwsgi/wiki/InstallationIssues which suggests that it might caused by missing Shared Library. A quick check reveals that shared library is not the issue. [centos57@localhost modules]$ ldd mod_wsgi.so linux-gate.so.1 => (0x00e6a000) libpython2.7.so.1.0 => /home/python/lib/libpython2.7.so.1.0 (0x0024c000) libpthread.so.0 => /lib/libpthread.so.0 (0x00da8000) libdl.so.2 => /lib/libdl.so.2 (0x00cd6000) libutil.so.1 => /lib/libutil.so.1 (0x00110000) libm.so.6 => /lib/libm.so.6 (0x0085c000) libc.so.6 => /lib/libc.so.6 (0x00682000) /lib/ld-linux.so.2 (0x0012b000) Then I found another clue that might be able to solve my problem. Unfortunately libexpat is not the source of the problem. http://code.google.com/p/modwsgi/wiki/IssuesWithExpatLibrary [centos57@localhost bin]$ ldd ~/httpd/bin/httpd | grep expat libexpat.so.1 => /usr/local/lib/libexpat.so.1 (0x00b00000) [centos57@localhost bin]$ strings /usr/local/lib/libexpat.so.1 | grep expat libexpat.so.1 expat_2.0.1 [centos57@localhost bin]$ python Python 2.7.2 (default, Nov 26 2011, 08:08:44) [GCC 4.1.2 20080704 (Red Hat 4.1.2-51)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import pyexpat >>> pyexpat.version_info (2, 0, 0) >>> I've been pulling my hair out trying to figure out what I'm missing in my setup. Why the problem only occurs with CentOS? Here is the detailed setup: Apache 2.2.19 Python 2.7.2 mod_wsgi-3.3 /home/httpd/conf/extra/pyramid.wsgi from pyramid.paster import get_app application = get_app('/home/homecamera/hcadmin/root/production.ini', 'main') /home/httpd/conf/extra/modwsgi.conf LoadModule wsgi_module modules/mod_wsgi.so WSGIScriptAlias /myapp /home/root/test.wsgi <Directory /home/root> WSGIProcessGroup pyramid Order allow,deny Allow from all </Directory> # Use only 1 Python sub-interpreter. Multiple sub-interpreters # play badly with C extensions. WSGIApplicationGroup %{GLOBAL} WSGIPassAuthorization On WSGIDaemonProcess pyramid user=daemon group=daemon processes=1 \ threads=4 \ python-path=/home/python/lib/python2.7/site-packages WSGIScriptAlias /hello /home/httpd/conf/extra/pyramid.wsgi <Directory /home/httpd/conf/extra> WSGIProcessGroup pyramid Order allow,deny Allow from all </Directory> Again this same setup works on OSX and Arch Linux but not on CentOS 5.7. Could someone out there point me to the right direction before I ran out of my hair. ==================================================================================== When apache started with gdb, I got a couple of warnings Reading symbols from /home/httpd/bin/httpd...done. Attaching to program: /home/httpd/bin/httpd, process 1821 warning: .dynamic section for "/lib/libcrypt.so.1" is not at the expected address warning: difference appears to be caused by prelink, adjusting expectations warning: .dynamic section for "/lib/libutil.so.1" is not at the expected address warning: difference appears to be caused by prelink, adjusting expectations gdb output. After hitting refresh button, to load pyramid. (gdb) cont Continuing. 
warning: .dynamic section for "/usr/lib/libgssapi_krb5.so.2" is not at the expected address warning: difference appears to be caused by prelink, adjusting expectations warning: .dynamic section for "/usr/lib/libkrb5.so.3" is not at the expected address warning: difference appears to be caused by prelink, adjusting expectations warning: .dynamic section for "/lib/libresolv.so.2" is not at the expected address warning: difference appears to be caused by prelink, adjusting expectations Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 0x8edbb90 (LWP 1824)] 0x0814c120 in EVP_PKEY_CTX_dup () apache_error_log [info] mod_wsgi (pid=1821): Starting process 'pyramid' with threads=1. [info] mod_wsgi (pid=1821): Initializing Python. [info] mod_wsgi (pid=1821): Attach interpreter ''. [info] mod_wsgi (pid=1821): Create interpreter 'web.domain.com:20000|/hcadmin'. [info] [client 10.211.55.2] mod_wsgi (pid=1821, process='pyramid', application='web.domain.com:20000|/hcadmin'): Loading WSGI script '/home/httpd/conf/extra/pyramid.wsgi'. [error] hello 1

    Read the article

  • SQL Server 2005 jobs running twice in a row - using LiteSpeed

    - by Malnizzle
    Howdy! I have a SQL Server (2005) backing up to a network share, which has a group of maintenance plans set up through LiteSpeed to back up different DBs. They were just set up to run two sub plans on different schedules for full/diff backups and did that just fine for a couple of months. Then I added a "Clean Up" task to the subplans. Ever since that point, the backup creates another bak right after the first bak job is completed. I removed the clean up item from the subplan, and it still creates two baks when run. Both the SQL Activity Monitor and the machine's Windows application log show just one job being executed. I did this same thing to a couple of other servers backing up to the same location, and they are behaving correctly. Thoughts?

    Read the article

  • Writing an SVN hook that updates copy of committed code

    - by Jordan Reiter
    I have a SVN repository with a lot of sub-projects stored in it. Right now in my post-commit I just loop through all possible folders on the machine and run svn update on each: REPOS="$1" REV="$2" DIRS=("/path/to/local/copy/firstproject" "/path/to/local/copy/anotherproject" ... "/path/to/local/copy/spam") LOGNAME=`/usr/bin/whoami` for DIR in ${DIRS[@]} do cd $DIR sudo /usr/bin/svn update --accept=postpone 2>&1 | logger logger "$LOGNAME Updated $DIR to revision $REV (from $REPOS) " done The problem is that this is slow and redundant when I'm just committing the subfolder of one of the projects. I'm wondering if there's a better way of identifying which of the DIRS I should use and only update that one. Is there some way to do this? As far as I can tell there's no way to determine which part of a repo was committed and thus which directory needs to be updated. Is the only alternative to create a separate repository for each project? (Probably should have done that from the start, if so...)
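
    svnlook can report which directories a revision touched, which is enough for the hook to update only the matching working copy. A sketch of that idea, assuming each entry in DIRS corresponds to a top-level project directory in the repository with the same basename:

    ```sh
    # Sketch: only update working copies whose project appears in this commit.
    CHANGED=$(/usr/bin/svnlook dirs-changed -r "$REV" "$REPOS")
    for DIR in "${DIRS[@]}"
    do
        PROJECT=$(basename "$DIR")
        if echo "$CHANGED" | grep -q "^${PROJECT}/"; then
            cd "$DIR"
            sudo /usr/bin/svn update --accept=postpone 2>&1 | logger
            logger "$LOGNAME Updated $DIR to revision $REV (from $REPOS)"
        fi
    done
    ```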

    Read the article
