Search Results

Search found 13033 results on 522 pages for 'catalog objects'.

Page 87/522 | < Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >

  • What is the best way to properly test object equality against an array of objects?

    - by radesix
    My objective is to abort the NSXMLParser when I parse an item that already exists in the cache. The basic flow of the program works like this: 1) The program starts and downloads an XML feed. Each item in the feed is represented by a custom object (FeedItem). Each FeedItem gets added to an array. 2) When the parsing is complete the contents of the array (all FeedItem objects) are archived to disk. The next time the program is executed or the feed is refreshed by the user I begin parsing again; however, since a cache (array) now exists, as each item is parsed I want to see whether the object exists in the cache. If it does then I know I have downloaded all the new items and no longer need to continue parsing. What I am learning, I think, is that I can't use indexOfObject: or indexOfObjectIdenticalTo: because these really seem to check whether the objects are using the same memory address (thus identical). What I want to do is see if the contents of the objects are equal (or at least some of the contents). I've done some research and found that I can override the isEqual: method; however, I really don't want to iterate/enumerate through the entire cache for every newly parsed FeedItem. Is iterating through the collection and testing each one for equality the only way to do this, or is there a better technique I am not aware of? Currently I am using the following code, though I know it needs to change: NSUInteger index = [self.feedListCache.feedList indexOfObject:self.currentFeedItem]; if (index == NSNotFound) { }
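
    A sketch of the usual Cocoa approach (it assumes FeedItem has some identifying property, here a hypothetical guid string): override isEqual: and hash on FeedItem so that equality means "same feed item" rather than "same pointer", then build an NSSet from the cached array once per refresh and test membership with containsObject:, which is a hash lookup rather than a linear scan.

        // In FeedItem.m -- 'guid' is an assumed identifying property, not taken from the question
        - (BOOL)isEqual:(id)other {
            if (other == self) return YES;
            if (![other isKindOfClass:[FeedItem class]]) return NO;
            return [self.guid isEqualToString:((FeedItem *)other).guid];
        }

        - (NSUInteger)hash {
            return [self.guid hash];
        }

        // Once per refresh, before parsing starts:
        NSSet *cachedItems = [NSSet setWithArray:self.feedListCache.feedList];

        // For each newly parsed item, inside the NSXMLParser delegate callback:
        if ([cachedItems containsObject:self.currentFeedItem]) {
            [parser abortParsing];   // item already cached; nothing newer will follow
        }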

    Read the article

  • Objects leaking immediately from allocation using either new or [[Object alloc] init];

    - by Sam
    While running Instruments to find leaks in my code, after I've loaded a file and populate an NSMutableArray with new objects, leaks pop up! I am correctly releasing the objects. Sample code below: //NSMutableArray declared as a retained property in the parent class if(!mutableArray) mutableArray = [[NSMutableArray alloc] initWithCapacity:objectCount]; else [mutableArray removeAllObjects]; //Iterates through the read in data and populate the NSMutableArray for(int i = 0; i < objectCount; i++){ //Initializes a new object with data MyObject *object = [MyObject new]; //Adds the object to the mutableArray [mutableArray addObject:object]; //Releases the object [object release]; } I get a number of leaks from Instruments terminating at the addition of the 'object' into the 'mutableArray', but also including the allocation of the 'object' and the 'mutableArray'. I don't get it. Not to mention, this is happening on the first call of the enclosing method so the allocation of the NSMutableArray is being hit in the logic block, not the 'removeAllObjects' selector. Lastly, does Core Foundation have a major bug in it that randomly creates CFStrings and mismanages their memory? My code does not even use those, nor do the leaks where they occur have anything to do with my code. Almost all of my applications so far deal with OpenGL (in case anyone knows of a threading issue that arises from trying to synch the backend of the program with the front end of displaying the contents of an NSOpenGLView class or whatever it is).
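
    The allocate/add/release sequence shown is balanced, and Instruments reports the allocation site of leaked memory rather than the site of the missing release, which is why the addObject: line shows up in the report. One common culprit under manual reference counting is the retained property itself never being let go. A minimal sketch, assuming the parent class declares the array as a retain property named mutableArray:

        - (void)dealloc {
            [mutableArray release];   // balances the retain held by the property
            mutableArray = nil;
            [super dealloc];
        }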

    Read the article

  • puppet cert mismatch in ec2

    - by Stick
    I'm setting up a puppetmaster (2.7.6) in ec2 via gems (on rhel6) and I'm running into problems with the cert names and getting the master able to talk to itself. my puppet.conf looks like this: [main] logdir = /var/log/puppet rundir = /var/run/puppet vardir = /var/lib/puppet ssldir = $vardir/ssl pluginsync = true environment = production report = true certname = master When I start the puppetmaster process the ssl directory looks like: ssl/private_keys/master.pem ssl/crl.pem ssl/public_keys/master.pem ssl/ca/ca_crl.pem ssl/ca/signed/master.pem ssl/ca/ca_crt.pem ssl/ca/ca_pub.pem ssl/ca/ca_key.pem ssl/certs/ca.pem ssl/certs/master.pem I have an /etc/hosts entry on the box to point the 'puppet' hostname to localhost so that I don't have to change the 'server' option. When I run the agent I get the following: # puppet agent --test info: Retrieving plugin err: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Server hostname 'puppet' did not match server certificate; expected master err: /File[/var/lib/puppet/lib]: Could not evaluate: Server hostname 'puppet' did not match server certificate; expected master Could not retrieve file metadata for puppet://puppet/plugins: Server hostname 'puppet' did not match server certificate; expected master err: Could not retrieve catalog from remote server: Server hostname 'puppet' did not match server certificate; expected master warning: Not using cache on failed catalog err: Could not retrieve catalog; skipping run err: Could not send report: Server hostname 'puppet' did not match server certificate; expected master If I specify the certname as the server (with corresponding hosts entry) I get: # puppet agent --test --server master info: Retrieving plugin err: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://master/plugins info: Caching catalog for master info: Applying configuration version '1321805956' notice: Finished catalog run in 0.05 seconds Which is success of a sort, that source error will bite me later when I'm applying manifests. I've tried a couple of other variations with using the ec2 private hostname and gotten mixed results. I'd like to avoid setting server = 'x' and use dns/hosts to control what 'puppet' resolves to in order to decide which server (plays easier with availability zones, etc)
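
    One way to keep agents pointing at the default 'puppet' name without hard-coding server = is to issue the master's certificate with DNS alternative names, so the same certificate matches both 'master' and 'puppet'. A sketch for the master's puppet.conf, assuming your 2.7.x release supports dns_alt_names; the hostnames are illustrative, and the setting only takes effect on a freshly generated certificate, so the existing master cert has to be cleaned and regenerated after adding it:

        [master]
            certname      = master
            dns_alt_names = master,puppet,puppet.example.internal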

    Read the article

  • puppet master REST API returns 403 when running under Passenger but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work?

    Read the article

  • Passive mode FTP file download hangs from specific machine

    - by chiptuned
    I have a server which is an AWS instance that just cannot download files from a specific FTP server. I can connect to the FTP server fine and run some commands, but when I request a file it just hangs. Here is the debug output of the base linux ftp client after login: ---> SYST 215 UNIX Type: Apache FtpServer Remote system type is UNIX. ftp> get outgoing/catalog.gz catalog.gz local: catalog.gz remote: outgoing/catalog.gz ---> PASV 227 Entering Passive Mode (64,156,167,125,135,191) ---> RETR outgoing/catalog.gz 150 File status okay; about to open data connection. Thats it. Then it just sits there and nothing transfers. I have verified that a data connection is made but the client gets no data. ? ss -nt dst 64.156.167.125 State Recv-Q Send-Q Local Address:Port Peer Address:Port ESTAB 0 0 10.185.147.150:41190 64.156.167.125:21 ESTAB 0 0 10.185.147.150:48871 64.156.167.125:48557 The FTP server is not in my control and downloads from other FTP servers in passive mode have worked. Active mode does not work as the system is behind a firewall. Every FTP client I've tried has the same problem. The download works from other systems, even from other AWS instances I have with the same Security Group. Not necessarily the same distro or config though. I understand it may be some issue on the server side, but I want to know what it is about my particular machine where the transfer hangs and where on every other machine I can get my hands on, it works. Please let me know what the culprit on the client side could be or ideas on what else to look at.
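
    Since the data connection does reach ESTABLISHED but no bytes ever arrive, one client-side difference worth ruling out on this particular instance is a path-MTU black hole (tunnels, VPNs and some virtual network setups can cause it): the small control-channel packets get through while full-size data packets are silently dropped. A couple of hedged checks, using the interface name eth0 and the server address from the trace above:

        # can a full-size, don't-fragment packet reach the FTP server?
        ping -M do -s 1472 64.156.167.125

        # temporarily clamp the MTU and retry the download
        sudo ip link set dev eth0 mtu 1400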

    Read the article

  • Upgrade Your Existing BI Publisher 11g (11.1.1.3) to 11.1.1.5

    - by Kan Nishida
    It’s already more than a month now since BI Publisher 11.1.1.5 was released at beginning of May. Have you already tried out many of the great new features? If you are already running on the first version of BI Publisher 11g (11.1.1.3) you might wonder how to upgrade the existing BI Publisher to the 11.1.1.5 version. There are two ways to do this, one is ‘Out-Place’ and another is ‘In-Place’. The ‘Out-Place’ would be quite simple. Basically you will need to install the whole BI or just BI Publisher standalone R11.1.1.5 at a different location then you can switch the catalog to the existing one so that all the reports will be there in the new 11.1.1.5 environment. But sometimes things are not that simple, you might have some custom applications or configuration on the original environment and you want to keep all of them with the upgraded environment. For such scenarios, there is the ‘In-Place’ upgrade, which overrides on top of the original environment only the parts relevant for BI and BI Publisher, and that’s what I’m going to talk about today. Here is the basic steps of the ‘In-Place’ upgrade. Upgrade WebLogic Server to 10.3.5 Upgrade BI System to 11.1.1.5 Upgrade Database Schema Re-register BI Components Upgrade FMW (Fusion Middleware) Configuration Upgrade BI Catalog There is a section that talks about this upgrade from 11.1.1.3 to 11.1.1.5 as part of the overall upgrade document. But I hope my blog post summarized it and made it simple for you to cover only what’s necessary. Upgrade Document: http://download.oracle.com/docs/cd/E21764_01/bi.1111/e16452/bi_plan.htm#BABECJJH Before You Start Stop BI System and Backup I can’t emphasize enough, but before you start PLEASE make sure you take a backup of the existing environments first. You want to stop all WebLogic Servers, Node Manager, OPMN, and OPMN-managed system components that are part of your Oracle BI domains. If you’re on Windows you can do this by simply selecting ‘Stop BI Services’ menu. Then backup the whole system. Upgrade WebLogic Server to 10.3.5 Download WebLogic Server 10.3.5 Upgrade Installer With BI 11.1.1.3 installation your WebLogic Server (WLS) is 10.3.3 and you need to upgrade this to 10.3.5 before upgrading the BI part. In order to upgrade you will need this 10.3.5 upgrade version of WLS, which you can download from our support web site (https://support.oracle.com) You can find the detail information about the installation and the patch numbers for the WLS upgrade installer on this document. Just for your short cut, if you are running on Windows or Linux (x86) here is the patch number for your platform. Windows 32 bit: 12395517: Linux: 12395517 Upgrade WebLogic Server 1. After unzip the downloaded file, launch wls1035_upgrade_win32.exe if you’re on Windows. 2. Accept all the default values and keep ‘Next’ till end, and start the upgrade. Once the upgrade process completes you’ll see the following window. Now let’s move to the BI upgrade. Upgrade BI Platform to 11.1.1.5 with Software Only Install Download BI 11.1.1.5 You can download the 11.1.1.5 version from our OTN page for your evaluation or development. For the production use it’s recommended to download from eDelivery. 1. Launch the installer by double click ‘setup.exe’ (for Windows) 2. Select ‘Software Only Install’ option 3. Select your original Oracle Home where you installed BI 11.1.1.3. 4. Click ‘Install’ button to start the installation. And now the software part of the BI has been upgraded to 11.1.1.5. Now let’s move to the database schema upgrade. 
Upgrade Database Schema with Patch Assistant You need to upgrade the BIPLATFORM and MDS Schemas. You can use the Patch Assistant utility to do this, and here is an example assuming you’ve created the schema with ‘DEV’ prefix, otherwise change it with yours accordingly. Upgrade BIPLATFORM schema (if you created this schema with DEV_ prev) psa.bat -dbConnectString localhost:1521:orcl -dbaUserName sys -schemaUserName DEV_BIPLATFORM Upgrade MDS schema (if you created this schema with DEV_ prev) psa.bat -dbConnectString localhost:1521:orcl -dbaUserName sys -schemaUserName DEV_MDS Re-register BI System components Now you need to re-register your BI system components such as BI Server, BI Presentation Server, etc to the Fusion Middleware system. You can do this by running ‘upgradenonj2eeapp.bat (or .sh)’ command, which can be found at %ORACLE_HOME%/opmn/bin. Before you run, you need to start the WLS Server and make sure your WLS environment is not locked. If it’s locked then you need to release the system from the Fusion Middleware console before you run the following command. Here is the syntax for the ‘upgradenonj2eeapp.bat (or .sh) command.  upgradenonj2eeapp.bat    -oracleInstance Instance_Home_Location    -adminHost WebLogic_Server_Host_Name    -adminPort administration_server_port_number    -adminUsername administration_server_user And here is an example: cd %BI_HOME%\opmn\bin upgradenonj2eeapp.bat -oracleInstance C:\biee11\instances\instance1 -adminHost localhost -adminPort 7001 -adminUsername weblogic Upgrade Fusion Middleware Configuration There are a couple things on the Fusion Middleware need to be upgraded for the BI system to work. Here is a list of the components to upgrade. Upgrade Shared Library (JRF) Upgrade Fusion Middleware Security (OPSS) Upgrade Code Grants Upgrade OWSM Policy Repository Before moving forward, you need to stop the WebLogic Server. Here is an example. cd %MW_HOME%user_projects\domains\bifoundation_domain\binstopWebLogic.cmd And, let’s start with ‘Upgrade Shared Library (JRF)’. Upgrade Shared Library (JRF) You can use updateJRF() WLST command to upgrade the shared libraries in your domain. Before you do this, you need to stop all running instances, Managed Servers, Administration Server, and Node Manager in the domain. Here is an example of the ‘upgradeJRF()’ command: cd %MW_HOME%\oracle_common\common\bin wlst.cmd upgradeJRF('C:/biee11/user_projects/domains/bifoundation_domain') Upgrade Fusion Middleware Security (OPSS) This step is to upgrade the Fusion Middleware security piece. You can use ‘upgradeOpss()’ WLST command. Here is a syntax for the command. upgradeOpss(jpsConfig="existing_jps_config_file", jaznData="system_jazn_data_file") The ‘existing jps-config.xml file can be found under %DOMAIN_HOME%/config/fmwconfig/jps-config.xml and the ‘system_jazn_data_file’ can be found under %MW_HOME%/oracle_common/modules/oracle.jps_11.1.1/domain_config/system-jazn-data.xml. And here is an example: cd %MW_HOME%\oracle_common\common\bin wlst.cmd upgradeOpss(jpsConfig="c:/biee11/user_projects/domains/bifoundation_domain/config/fmwconfig/jps-config.xml", jaznData="c:/biee11/oracle_common/modules/oracle.jps_11.1.1/domain_config/system-jazn-data.xml") exit() Upgrade Code Grants for Oracle BI Domain And this is the last step for the Fusion Middleware platform upgrade task. You need to run this python script ‘bi-upgrade.py‘ script to configure the code grants necessary to ensure that SSL works correctly for Oracle BI. 
However, even if you don’t use SSL, you still need to run this script. And if you have multiple BI domains (Enterprise deployment) then you need to run this on each domain. Here is an example: cd %MW_HOME%\oracle_common\common\bin wlst c:\biee11\Oracle_BI1\bin\bi-upgrade.py --bioraclehome c:\biee11\Oracle_BI1 --domainhome c:\biee11\user_projects\domains\bifoundation_domain Upgrade OWSM Policy Repository This is to upgrade OWSM (Oracle Web Service Manager) policy repository, you can use WLST command ‘upgradeWSMPolicyRepository()’. In order to run this command you need to have your WebLogic Server up-and-running. Here is an example. cd %MW_HOME%user_projects\domains\bifoundation_domain\binstopWebLogic.cmd cd %MW_HOME%\oracle_common\common\bin wlst.cmd connect ('weblogic','welcome1','t3://localhost:7001') upgradeWSMPolicyRepository() exit() Upgrade BI Catalogs This step is required only when you have your BI Publisher integrated with BIEE. If your BI Publisher is deployed as a standalone then you don’t need to follow this step. Now finally, you can upgrade the BI catalog. This won’t upgrade your BI Publisher reports themselves, but it just upgrades some attributes information inside the catalog. Before you do this upgrade, make sure the BI system components are not running. You can check the status by the command below. opmnctl status You can do the upgrade by updating a configuration file ‘instanceconfig.xml’, which can be found at %BI_HOME%\instances\instance1\config\coreapplication_obips1, and change the value of ‘UpgradeAndExit’ to be ‘true’. Here is an example: <ps:Catalog xmlns:ps="oracle.bi.presentation.services/config/v1.1"> <ps:UpgradeAndExit>true</ps:UpgradeAndExit> </ps:Catalog> After you made the change and save the file, you need to start the BI Presentation Server. This time you want to start only the BI Presentation Server instead of starting all the servers. You can use ‘opmnctl’ to do so, and here is an example. cd %ORACLE_INSTANCE%\bin opmnctl startproc ias-component=coreapplication_obips1 This would upgrade your BI Catalog to be 11.1.1.5. After the catalog is updated, you can stop the BI Presentation Server so that you can modify the instanceconfig.xml file again to revert the upgradeAndExit value back to ‘false’. Start Explore BI Publisher 11.1.1.5 After all the above steps, you can start all the BI Services, access to the same URL, now you have your BI Publisher and/or BI 11.1.1.5 in your hands. Have fun exploring all the new features of R11.1.1.5!

    Read the article

  • A New Year’s Celebration in June

    - by Kristin Rose
    Happy Oracle New Year Everyone! Last week marked the official start to FY13 and we could not be more pleased with all that lies ahead this quarter, and all that we accomplished in the last…especially our newly updated Oracle PartnerNetwork (OPN) Solutions Catalog. If you thought it was great before, just wait until you see it now. We are ringing in our New Year right by fully equipping partners with the necessary tools they need to have another successful year. The Solutions Catalog will help draw attention to your partner services and offerings, highlighting your expertise. It is a centralized, customer-friendly site that is easy to navigate. Some of the exciting advancements include: a streamlined search interface; a robust lead capture tool that requests the contact information of potential customers; a professional display of customer recommendations to showcase your skill set; and a partner dashboard with enhanced profile creation and an improved publication process. Most exciting of all, updating your profile is easier than ever with the updated partner dashboard. Keeping your partner profile up to date will help ensure customers are looking at the correct information about your company, and can easily stay on top of any new developments or Specializations you receive. So don’t cut yourself short; be sure to update your profile today if you haven’t already done so. For more information on the exciting upgrades available to you, visit the ‘Resources for Partners’ page or watch Takane Aizeki, Principal Portal Manager at Oracle, walk through the upgraded Solutions Catalog and the different ways to showcase your value as an Oracle solution provider. Cheers, Lydia Smyers, Group Vice President, WWA&C and Communications

    Read the article

  • Structuring database for multi-object "activity" and "following" functionalities

    - by romaninsh
    I am working on a web application which operates with different types of objects such as users, profiles, pages, etc. All objects have a unique object_id. When objects interact this may produce "activity", such as a user posting on a page or profile. Activity may be related to multiple objects through their object_id. Users may also follow "objects", and they need to be able to see a stream of relevant activity. Could you provide me with some data structure suggestions which would be efficient and scalable? My goal is to show activity limited to the objects which the user is following. I am not limited to relational databases. Update: As I'm getting advice on ORMs and how to index things, I'd like to stress my question again. According to my current design model the database structure looks like this: As you can see, it's quite easy to implement a database like that. The Activity and Follower tables do contain a much larger number of records than the upper level, but that's tolerable. But when it comes to creating a "timeline" table, it becomes a nightmare. For every user I need to reference all the activities of the objects he follows. In terms of records it easily gets out of control. Please suggest how to change this structure to avoid timeline creation and also be able to quickly retrieve activity for any given user. Thanks.
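
    One direction worth sketching: skip the materialized timeline entirely and resolve the feed at read time by joining the follower list to the activity log; fan-out-on-write (one timeline row per follower per activity) is only needed once that query stops being fast enough. A minimal sketch; the table and column names are assumptions based on the structure described above:

        SELECT a.*
        FROM activity        AS a
        JOIN activity_object AS ao ON ao.activity_id = a.id      -- which objects each activity touches
        JOIN follower        AS f  ON f.object_id    = ao.object_id
        WHERE f.user_id = :viewer_id                             -- the user whose feed we are building
        ORDER BY a.created_at DESC
        LIMIT 50;

    With indexes on follower(user_id, object_id) and activity_object(object_id, activity_id) this stays index-driven for typical follow counts.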

    Read the article

  • Best way to setup hosts, subdomains, and IPs [closed]

    - by LynnOwens
    I own a domain, let's call it mydomain.com. I need to host the following off it: forums.mydomain.com www.mydomain.com blog.mydomain.com objects.mydomain.com I believe I can get 5 static IPs. I plan on assigning one each to those four hosts. Then I need to create names ad hoc, all below objects.mydomain.com. For instance: one.objects.mydomain.com two.objects.mydomain.com three.objects.mydomain.com I need to create these names programmatically, and without human intervention. Preferably, they would not get their own IPs. They would use the IP of objects.mydomain.com. First question: does this mean that I need to host my own DNS? Second question: I'm using Apache as a web server. What does the virtual host configuration look like? I was experimenting with the following to understand how routing on domain names works, and I always ended up at www. <VirtualHost *:80> ServerAdmin [email protected] ServerName www.mydomain.com ServerAlias www.mydomain.com DocumentRoot "E:/Static/www" RewriteEngine On RewriteRule ^(/www/.*) /www$1 </Virtualhost> <VirtualHost *:80> ServerAdmin [email protected] ServerName forums.mydomain.com ServerAlias forums.mydomain.com DocumentRoot "E:/Static/forums" RewriteEngine On RewriteRule ^(/forums/.*) /forums$1 </Virtualhost>
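
    On the Apache side, the ad hoc names do not each need their own vhost: a single vhost for objects.mydomain.com with a wildcard ServerAlias catches one.objects.mydomain.com, two.objects.mydomain.com and so on, provided DNS resolves them (a wildcard *.objects.mydomain.com record at your DNS provider is enough, so you do not necessarily have to run your own DNS). A sketch in the same style as the config above; the DocumentRoot is illustrative:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName  objects.mydomain.com
            ServerAlias *.objects.mydomain.com
            DocumentRoot "E:/Static/objects"
        </VirtualHost>

    Always ending up at www usually means the requested name matched no ServerName/ServerAlias, so Apache fell back to the first vhost defined.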

    Read the article

  • Filter a list of objects using a property that is another list, using LINQ

    - by Luís Custódio
    I have a situation that I thought would be a common query, but I'm having some problems trying to solve it. The situation is: I have a list of "Houses", and each house has a list of "Windows". I want to filter, for a catalog, only the Houses which have blue windows, so my extension method on House is something like: public static List<House> FilterByWindow (this IEnumerable<House> houses, Window blueOne){ houses.Select(p=>p.Windows.Where(q=>q.Color == blueOne.Color)); return houses.ToList(); } Is this correct, or am I missing something? Any better suggestions?
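
    As written, the result of the Select is discarded and the unfiltered houses list is returned. A sketch of the intended filter, keeping the houses that have at least one window matching the requested colour:

        public static List<House> FilterByWindow(this IEnumerable<House> houses, Window blueOne)
        {
            return houses
                .Where(h => h.Windows.Any(w => w.Color == blueOne.Color))
                .ToList();
        }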

    Read the article

  • Django QuerySet ordering by number of reverse ForeignKey matches

    - by msanders
    I have the following Django models: class Foo(models.Model): title = models.CharField(_(u'Title'), max_length=600) class Bar(models.Model): foo = models.ForeignKey(Foo) eg_id = models.PositiveIntegerField(_(u'Example ID'), default=0) I wish to return a list of Foo objects which have a reverse relationship with Bar objects that have a eg_id value contained in a list of values. So I have: id_list = [7, 8, 9, 10] qs = Foo.objects.filter(bar__eg_id__in=id_list) How do I order the matching Foo objects according to the number of related Bar objects which have an eg_id value in the id_list?
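
    One sketch using Django's aggregation support: annotate each Foo with the number of matching related Bars and order by that count. Because the annotate follows the filter, only Bar rows whose eg_id is in id_list are counted; the num_matches alias is illustrative:

        from django.db.models import Count

        id_list = [7, 8, 9, 10]
        qs = (
            Foo.objects
            .filter(bar__eg_id__in=id_list)
            .annotate(num_matches=Count('bar'))
            .order_by('-num_matches')
        )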

    Read the article

  • Is it okay to programmatically create UITouch and UIEvent objects to emulate touch events?

    - by mystify
    I want to simulate some touches on my UI without using private API. One simple way to do it would be to call the -touchesBegan:withEvent:, -touchesMoved:withEvent:, -touchesEnded:withEvent: and -touchesCancelled:withEvent: methods inside my custom controls. For that, I would have to create UITouch and UIEvent dummy objects with appropriate data inside. Is this fine with Apple, or would they reject my app?

    Read the article

  • List of objects sent over WCF, but null list received?

    - by GONeale
    Hey there, I have an object containing a list of custom objects which I am returning over a response in WCF, however on the receiving end, the list is null? But it contains 112 objects just prior to stepping out of the service on the server. This wasn't always the case, I have seen it return a list. I've just recently upgraded it to use NET TCP bindings, but I can't confirm when I started losing the data or if it was since the conversion from wsHttpBinding to netTcpBinding as it moved along with about four other services. I have looked on the WCF Service messages and trace file and also the WCF client's messages and trace file, no exceptions reported, and both message logs indicate they are sending the List<T> and for client, receiving the list - very frustrating! It's not a super light array, but not huge either, around 100KB. it has about 12 properties each and as stated 112 items are being sent. I have tried everything I can think of on client and server, note: Client: this.binding = new NetTcpBinding(SecurityMode.None) { MaxReceivedMessageSize = int.MaxValue, ReaderQuotas = { MaxStringContentLength = int.MaxValue, MaxArrayLength = int.MaxValue } }; ... Server app.config (sorry I have no idea if the quota settings have any bearing on net tcp? I only just added it similar to what I use for wsHttpBinding to test, but still list is null): <netTcpBinding> <binding name="SecurityByNetTcpTransportBinding" sendTimeout="00:03:00" maxReceivedMessageSize="2147483647"> <readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" /> <security mode="None" /> </binding> </netTcpBinding> and something else I tried in my net tcp binding behavior: <dataContractSerializer maxItemsInObjectGraph="2147483647" /> <serviceTimeouts transactionTimeout="05:05:00" /> <serviceThrottling maxConcurrentSessions="400" maxConcurrentInstances="400" maxConcurrentCalls="400"/> I hope somebody can help, I hate 5 steps forward 3 steps backward which always seems to be the case with WCF :P In the interim until I [hopefully] get a response I will now try reducing this array just to see if it's a sizing issue.. Ok, It seems I have bigger problems. Because the list was the only thing I was sending, I thought it was an array issue. I am even setting an int to "25" and it's coming back as 0 - Anybody? I know I must have done something obviously stupid.
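
    Since no fault is raised and even a plain int set to 25 arrives as 0, one classic cause worth checking before blaming the binding is that the members are simply not part of the serialized data contract: either [DataMember] is missing on properties of a [DataContract] type, or the client is compiled against an older, mismatched version of the contract assembly. A hedged sketch with illustrative names, not types from the question:

        using System.Runtime.Serialization;

        [DataContract]
        public class InventoryItem
        {
            [DataMember]   // without this the DataContractSerializer silently skips the member
            public string Name { get; set; }

            [DataMember]
            public int Quantity { get; set; }
        }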

    Read the article

  • How do I retrieve a list of base class objects without joins using NHibernate ICriteria?

    - by Kristoffer
    Let's say I have a base class called Pet and two subclasses Cat and Dog that inherit Pet. I simply map these to three tables Pet, Cat and Dog, where the Pet table contains the base class properties and the Cat and Dog tables contain a foreign key to the Pet table and any additional properties specific to a cat or dog. A joined subclass strategy. Now, using NHibernate and ICriteria, how can I get a list of "pure" Pet objects (not cats or dogs, just pets), without making any joins to the other tables?
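
    Criteria queries are polymorphic by default, so a criteria on Pet returns Cats and Dogs as well. One commonly suggested restriction is the special "class" property, which keeps only rows whose concrete type is Pet; note that with a joined-subclass mapping NHibernate may still emit the outer joins in the generated SQL, the restriction only filters the result. A hedged sketch:

        // Restrictions lives in NHibernate.Criterion
        var purePets = session.CreateCriteria(typeof(Pet))
            .Add(Restrictions.Eq("class", typeof(Pet)))
            .List<Pet>();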

    Read the article

  • Space partitioning when everything is moving

    - by Roy T.
    Background Together with a friend I'm working on a 2D game that is set in space. To make it as immersive and interactive as possible we want there to be thousands of objects freely floating around, some clustered together, others adrift in empty space. Challenge To unburden the rendering and physics engine we need to implement some sort of spatial partitioning. There are two challenges we have to overcome. The first challenge is that everything is moving so reconstructing/updating the data structure has to be extremely cheap since it will have to be done every frame. The second challenge is the distribution of objects, as said before there might be clusters of objects together and vast bits of empty space and to make it even worse there is no boundary to space. Existing technologies I've looked at existing techniques like BSP-Trees, QuadTrees, kd-Trees and even R-Trees but as far as I can tell these data structures aren't a perfect fit since updating a lot of objects that have moved to other cells is relatively expensive. What I've tried I made the decision that I need a data structure that is more geared toward rapid insertion/update than on giving back the least amount of possible hits given a query. For that purpose I made the cells implicit so each object, given it's position, can calculate in which cell(s) it should be. Then I use a HashMap that maps cell-coordinates to an ArrayList (the contents of the cell). This works fairly well since there is no memory lost on 'empty' cells and its easy to calculate which cells to inspect. However creating all those ArrayLists (worst case N) is expensive and so is growing the HashMap a lot of times (although that is slightly mitigated by giving it a large initial capacity). Problem OK so this works but still isn't very fast. Now I can try to micro-optimize the JAVA code. However I'm not expecting too much of that since the profiler tells me that most time is spent in creating all those objects that I use to store the cells. I'm hoping that there are some other tricks/algorithms out there that make this a lot faster so here is what my ideal data structure looks like: The number one priority is fast updating/reconstructing of the entire data structure Its less important to finely divide the objects into equally sized bins, we can draw a few extra objects and do a few extra collision checks if that means that updating is a little bit faster Memory is not really important (PC game)
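
    If it helps, here is a hedged Java sketch of the implicit-grid idea with most of the allocation churn removed: the cell key is the pair of integer cell coordinates packed into a single long (no tuple or key objects), and the per-cell ArrayLists are kept across frames and cleared rather than recreated, so steady-state updates allocate almost nothing. Names and sizes are illustrative.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        final class SpatialHash<T> {
            private final float cellSize;
            private final Map<Long, List<T>> cells = new HashMap<>(1 << 14);

            SpatialHash(float cellSize) { this.cellSize = cellSize; }

            private long key(float x, float y) {
                long cx = (long) Math.floor(x / cellSize);
                long cy = (long) Math.floor(y / cellSize);
                return (cx << 32) ^ (cy & 0xffffffffL);   // unique per cell for 32-bit cell coords
            }

            /** Call once per frame, then re-insert every object: buckets are reused, not reallocated. */
            void beginFrame() {
                for (List<T> bucket : cells.values()) bucket.clear();
            }

            void insert(float x, float y, T obj) {
                cells.computeIfAbsent(key(x, y), k -> new ArrayList<>()).add(obj);
            }

            List<T> cellAt(float x, float y) {
                List<T> bucket = cells.get(key(x, y));
                if (bucket == null) return Collections.emptyList();
                return bucket;
            }
        }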

    Read the article

  • User Profile objects are empty, even though the user logged in properly?

    - by Ahmed
    I use the asp:Login control and the user can log in properly, but when checking the user's Profile information within the LoggedIn event of the Login control, all of the fields in the Profile object are empty. Also, User.Identity.IsAuthenticated always returns false. However, all of these issues resolve themselves after navigating to another page. Why does User.Identity.IsAuthenticated return false even though the user logged in properly? And is there any way to get the user's profile information within the LoggedIn event of the Login control?
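
    This is expected with forms authentication: during the LoggedIn event the authentication cookie has only just been issued, so HttpContext.User (and the Profile that hangs off it) still describes the request as it arrived, i.e. anonymous; both are populated on the next request. To read the new user's profile inside LoggedIn, load it explicitly by the name that was just authenticated. A sketch; ProfileCommon is the auto-generated profile class in a Web Site project and FavoriteColor is an assumed profile property:

        protected void Login1_LoggedIn(object sender, EventArgs e)
        {
            // Login1.UserName holds the name that was just authenticated
            var profile = (ProfileCommon)ProfileBase.Create(Login1.UserName, true);
            string color = profile.FavoriteColor;   // assumed property, for illustration only
        }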

    Read the article

  • Nested Model binding in ASP.NET MVC2. fields_for from rails equivalent

    - by dagda1
    Hi, I am looking for some examples of how to do model binding in ASP.NET MVC2 for COMPLEX objects. All the examples I can find are of simple objects with no child collections or child objects. Suppose I have an Expense object with a child ExpensePayment object. In Rails, child objects are rendered with HTML name attributes like this: expense[expense_payment][net] Rails uses fields_for to render child objects. How can I accomplish something similar in ASP.NET MVC2? Cheers Paul
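
    The MVC 2 counterpart of fields_for is the dot notation understood by the DefaultModelBinder: name the inputs after the property path and the nested object is rebuilt on postback. A sketch, assuming a view strongly typed to Expense and an action such as Create(Expense expense):

        <%-- renders <input name="ExpensePayment.Net" ... />, which the DefaultModelBinder
             binds to expense.ExpensePayment.Net on postback --%>
        <%= Html.TextBoxFor(m => m.ExpensePayment.Net) %>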

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works. The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the arc case: PSARC 2010/312 Link-editor guidance The History Of Unfortunate Link-Editor Defaults The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system. Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Extensible Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading, and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: For backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort. 
We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines: Change ld Defaults Since the world would be a better place the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting. New Link-Editor One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of exiting code in the world built around the existing ld command, that reaches back to the 1970's. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues. An Option To Make ld Do The Right Things Automatically This line of reasoning is best summarized by a CR filed in 2005, entitled 6239804 make it easier for ld(1) to do what's best The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction: The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best. The intent is that when a user opts into -z best, that they understand that z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change. 
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed. The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it. I drew two conclusions from the above history: For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together. The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls: -z guidance gives advice, but the decision to take that advice remains with the user who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience. It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject. The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system. Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards. ld Command Line Options The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature: In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement. 
Guidance suggestions always offers specific advice, and not vague generalizations. You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed. As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that let's you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows: % ld -Dargs -z guidance=foo,nodefs debug: debug: Solaris Linkers: 5.11-1.2275 debug: debug: arg[1] option=-D: option-argument: args debug: arg[2] option=-z: option-argument: guidance=foo,nodefs debug: warning: unrecognized -z guidance item: foo The -z fatal-warning option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms: -z fatal-warnings | nofatal-warnings --fatal-warnings | --no-fatal-warnings The -z fatal-warnings and the --fatal-warnings option cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings option cause the link-editor to treat warnings as non-fatal. This is the default behavior. The -z guidance option is defined as follows: -z guidance[=item1,item2,...] Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices. It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows. Specify Required Dependencies Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs. Do Not Specify Non-Required Dependencies Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused. Lazy Loading Dependencies should be identified for lazy loading. 
Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload. Direct Bindings Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct, or -z direct options should any dependency be processed before either of these options, or the -z nodirect option is encountered. This guidance can be disabled with -z guidance=nodirect. Pure Text Segment Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn or -z textoff options are encountered. This guidance can be disabled with -z guidance=notext. Mapfile Syntax All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile. Library Search Path Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath. In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects. Example The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings: Does not specify all it's dependencies Specifies dependencies it does not use Does not use direct bindings Uses a version 1 mapfile Contains relocations to the readonly allocable text (not PIC) This scenario is sadly very common — many shared objects have one or more of these issues. % cat hello.c #include <stdio.h> #include <unistd.h> void hello(void) { printf("hello user %d\n", getpid()); } % cat mapfile.v1 # This version 1 mapfile will trigger a guidance message % cc hello.c -o hello.so -G -M mapfile.v1 -lelf As you can see, the operation completes without error, resulting in a usable object. 
However, turning on guidance reveals a number of things that could be better: % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance ld: guidance: version 2 mapfile syntax recommended: mapfile.v1 ld: guidance: -z lazyload option recommended before first dependency ld: guidance: -B direct or -z direct option recommended before first dependency Undefined first referenced symbol in file getpid hello.o (symbol belongs to implicit dependency /lib/libc.so.1) printf hello.o (symbol belongs to implicit dependency /lib/libc.so.1) ld: warning: symbol referencing errors ld: guidance: -z defs option recommended for shared objects ld: guidance: removal of unused dependency recommended: libelf.so.1 warning: Text relocation remains referenced against symbol offset in file .rodata1 (section) 0xa hello.o getpid 0x4 hello.o printf 0xf hello.o ld: guidance: position independent (PIC) code recommended for shared objects ld: guidance: see ld(1) -z guidance for more information Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things: % cat mapfile.v2 # This version 2 mapfile will not trigger a guidance message $mapfile_version 2 % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance There are situations in which the guidance does not fit the object being built. For instance, you want to build an object without direct bindings: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance ld: guidance: -B direct or -z direct option recommended before first dependency ld: guidance: see ld(1) -z guidance for more information It is easy to disable that specific guidance warning without losing the overall benefit from allowing the remainder of the guidance feature to operate: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect Conclusions The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact. Deciding to use guidance is likely to cause some up front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will replay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.

    Read the article

  • Domain Models (PHP)

    - by Calum Bulmer
    I have been programming in PHP for several years and have, in the past, adopted methods of my own to handle data within my applications. I have built my own MVC, in the past, and have a reasonable understanding of OOP within php but I know my implementation needs some serious work. In the past I have used an is-a relationship between a model and a database table. I now know after doing some research that this is not really the best way forward. As far as I understand it I should create models that don't really care about the underlying database (or whatever storage mechanism is to be used) but only care about their actions and their data. From this I have established that I can create models of lets say for example a Person an this person object could have some Children (human children) that are also Person objects held in an array (with addPerson and removePerson methods, accepting a Person object). I could then create a PersonMapper that I could use to get a Person with a specific 'id', or to save a Person. This could then lookup the relationship data in a lookup table and create the associated child objects for the Person that has been requested (if there are any) and likewise save the data in the lookup table on the save command. This is now pushing the limits to my knowledge..... What if I wanted to model a building with different levels and different rooms within those levels? What if I wanted to place some items in those rooms? Would I create a class for building, level, room and item with the following structure. building can have 1 or many level objects held in an array level can have 1 or many room objects held in an array room can have 1 or many item objects held in an array and mappers for each class with higher level mappers using the child mappers to populate the arrays (either on request of the top level object or lazy load on request) This seems to tightly couple the different objects albeit in one direction (ie. a floor does not need to be in a building but a building can have levels) Is this the correct way to go about things? Within the view I am wanting to show a building with an option to select a level and then show the level with an option to select a room etc.. but I may also want to show a tree like structure of items in the building and what level and room they are in. I hope this makes sense. I am just struggling with the concept of nesting objects within each other when the general concept of oop seems to be to separate things. If someone can help it would be really useful. Many thanks
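
    For what it's worth, the nesting described above is the normal shape of an aggregate and does not fight OOP: the coupling runs only downwards (a Building knows its Levels, a Level knows nothing about its Building), and the mappers are the only things that know about storage. A minimal PHP sketch of that shape, with illustrative names only; the mappers would assemble these objects from the lookup tables:

        <?php
        class Item {}

        class Room {
            private $items = array();
            public function addItem(Item $item) { $this->items[] = $item; }
            public function getItems() { return $this->items; }
        }

        class Level {
            private $rooms = array();
            public function addRoom(Room $room) { $this->rooms[] = $room; }
            public function getRooms() { return $this->rooms; }
        }

        class Building {
            private $levels = array();
            public function addLevel(Level $level) { $this->levels[] = $level; }
            public function getLevels() { return $this->levels; }
        }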

    Read the article
