Search Results

Search found 11032 results on 442 pages for 'junior rails programmer'.

Page 286/442 | < Previous Page | 282 283 284 285 286 287 288 289 290 291 292 293  | Next Page >

  • Access database: need to prevent approving overlapping OT (second try with modified request; not a programmer) [on hold]

    - by user2512764
    Employees sign up on the company website for advance overtime lines. The Access table already has the overtime signups, so the user does not have to add the time; they only have to add a location to mark a line as approved. The table has the fields Employee Name, Date, Start Time, End Time and Location, and all the fields have data except for Location. In the database I have created a form based on this table. Since the table already has most of the information, the user only has to add a location in the form field in order to approve overtime. For example: the user approves overtime for employee 'John' which starts on 7/1/2013 at 0400-0800, and the location is successfully added. Later the user tries to add a location for John again on a line that runs on 7/1/2013 from 0600-0900. Again, we are not entering the start time, end time or date, since they are already in the table; we are only entering the location as the approval. As soon as the user enters the location for John in the form field, there is a conflict with a previously approved overtime line. The program needs to check the employee name, date and time against previously approved (location added) overtime lines, and the location in the current record needs to be cleared before moving to the next record. I hope I have explained it in an understandable format. Thank you.
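
    The usual way to express "the new line overlaps an already approved line" is a single comparison of the two time ranges: they overlap when each one starts before the other one ends. A minimal sketch in Access SQL, where the table name, field names and the [New...] parameters are only assumptions based on the description above (in practice the [New...] values would come from the form, e.g. Forms![YourForm]![StartTime]):

        SELECT COUNT(*) AS Conflicts
        FROM OvertimeSignups AS approved
        WHERE approved.[Employee Name] = [NewEmployeeName]
          AND approved.[Date] = [NewDate]
          AND approved.Location Is Not Null
          AND approved.[Start Time] < [NewEndTime]
          AND approved.[End Time] > [NewStartTime];

    The Location Is Not Null condition limits the check to lines that were already approved. If Conflicts is greater than zero, the form (for example in its BeforeUpdate event) can clear the Location that was just entered and move on to the next record.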

    Read the article

  • Heroku and Github integration (how to structure the project)

    - by Noah
    I'm creating a webservice and I want to store the source on github and run the app on heroku. I haven't seen my exact scenario addressed anywhere on the 'net so far, so I'll ask it here: I want to have the following directory structure: /project .git README <-- project readme file TODO.otl <-- project outline ... <-- other project-related stuff /my_rails_app app config ... README <-- rails' readme file In the above, project corresponds to http://github.com/myuser/project, and my_rails_app is the code that should be pushed to heroku. Do I need a separate branch for the rails app, or is there a simpler way that I'm missing? I guess my project-related non-rails files could live in my_rails_app, but the rails README already lives there and it seems inconsistent to overwrite that. However, if I leave it, my github page for the rails app will contain the rails readme, which makes no sense. Thanks, Noah P.S. I tried just setting it up as described above and running git push heroku from the main project folder. Of course, heroku doesn't know I want to deploy the subfolder: -----> Heroku receiving push ! Heroku push rejected, no Rails or Rack app detected.
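
    One common approach, assuming a reasonably recent git (the subtree command ships with git 1.7.11+) and that my_rails_app is committed inside the same repository, is to keep the single-repo layout above and push only the subfolder to Heroku:

        # run from the /project root; publishes my_rails_app/ as the root of the Heroku app
        git subtree push --prefix my_rails_app heroku master

    That keeps the project README at the top of the GitHub page and the Rails README inside my_rails_app, with no separate branch to maintain by hand.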

    Read the article

  • jQuery AJAX request (Rails 3) gets redirected and returns empty message body (only with SSL)!

    - by elsurudo
    I'm trying to do a manual jQuery AJAX request the following way: $("#user_plan_id").change(function() { $("#plan_container").load('/plans/' + this.value); }); I have the "rails.js" file included in my header, and a "<%= csrf_meta_tag %>". I see from my log that the request IS getting to the server (although without the authenticity token... does rails.js even do this?), but the response is a 302 (Found) rather than 200, and no data actually gets rendered. Any ideas? Edit: I now see that the first request redirects, and the proper partial gets rendered on the redirect. However, the 2nd response's body (on the client-side) is still empty. I'm guessing jQuery uses the first response and doesn't have a listener set up for the redirect. How do I get around this? Also, another note: the page doing the requesting is an HTTPS page. Here is what my log says: Started GET "/plans/221168073" for 127.0.0.1 at Tue Jun 15 01:24:06 -0400 2010 Processing by PlansController#show as HTML Parameters: {"id"=>"221168073"} DEPRECATION WARNING: Using #request_uri is deprecated. Use fullpath instead. (called from ensure_proper_protocol at /Users/ernestsurudo/Sites/vidfolia/vendor/plugins/ssl_requirement/lib/ssl_requirement.rb:57) Redirected to http://vidfolia.com/plans/221168073 Completed 302 Found in 1ms It turns out that if I turn off SSL requirement for that page, it works! I still have no idea why, though. So I suppose my question is: what is the workaround?
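
    For what it's worth, the 302 in the log comes from the ssl_requirement plugin bouncing the request from HTTPS to HTTP, and the browser will not complete a cross-protocol XMLHttpRequest from an HTTPS page, so .load() ends up with an empty body. A minimal sketch of one workaround, assuming the plugin's ssl_allowed macro and a conventional show action (the controller body here is an illustration, not the poster's actual code):

        class PlansController < ApplicationController
          # Let this action be served over both HTTP and HTTPS, so an AJAX call made
          # from an HTTPS page is answered directly instead of via a 302 redirect.
          ssl_allowed :show

          def show
            @plan = Plan.find(params[:id])
            render :partial => "plan", :locals => { :plan => @plan }
          end
        end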

    Read the article

  • jQuery AJAX request (Rails 3) gets redirected and returns empty message body!

    - by elsurudo
    I'm trying to do a manual jQuery AJAX request the following way: $("#user_plan_id").change(function() { $("#plan_container").load('/plans/' + this.value); }); I have the "rails.js" file included in my header, and a "<%= csrf_meta_tag %>". I see from my log that the request IS getting to the server (although without the authenticity token... does rails.js even do this?), but the response is a 302 (Found) rather than 200, and no data actually gets rendered. Any ideas? Edit: I now see that the first request redirects, and the proper partial gets rendered on the redirect. However, the 2nd response's body (on the client-side) is still empty. I'm guessing jQuery uses the first response and doesn't have a listener set up for the redirect. How do I get around this? Also, another note: the page doing the requesting is an HTTPS page. Here is what my log says: Started GET "/plans/221168073" for 127.0.0.1 at Tue Jun 15 01:24:06 -0400 2010 Processing by PlansController#show as HTML Parameters: {"id"=>"221168073"} DEPRECATION WARNING: Using #request_uri is deprecated. Use fullpath instead. (called from ensure_proper_protocol at /Users/ernestsurudo/Sites/vidfolia/vendor/plugins/ssl_requirement/lib/ssl_requirement.rb:57) Redirected to http://vidfolia.com/plans/221168073 Completed 302 Found in 1ms Perhaps it has something to do with the deprecation warning?

    Read the article

  • How to serve Rails application with Passenger/Apache without domain name?

    - by grifaton
    I am trying to serve a Rails application using Passenger and Apache on a Ubuntu server. The Passenger installation instructions say I should add the following to my Apache configuration file - I assume this is /etc/apache2/httpd.conf. <VirtualHost *:80> ServerName www.yourhost.com DocumentRoot /somewhere/public # <-- be sure to point to 'public'! <Directory /somewhere/public> AllowOverride all # <-- relax Apache security settings Options -MultiViews # <-- MultiViews must be turned off </Directory> </VirtualHost> However, I do not yet have a domain pointing at my server, so I'm not sure what I should put for the ServerName parameter. I have tried the IP address, but when I do that, restarting Apache gives apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName [Sun Jan 17 12:49:26 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName [Sun Jan 17 12:49:36 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results and pointing the browser at the IP address gives a 500 Internal Server Error. The closest I have got to something sensible is with <VirtualHost efate:80> ServerName efate DocumentRoot /root/jpf/public <Directory /root/jpf/public> AllowOverride all Options -MultiViews </Directory> </VirtualHost> where "efate" is my server's host name. But now pointing my browser at the server's IP address just gives a page saying "It works!" - presumably this is a default page, but I'm not sure where this is being served from. I might be wrong in thinking that the reason I have been unable to get this to work is related to not having a domain name. This is the first time I have used Apache directly - any help would be most gratefully received!
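
    For what it's worth, ServerName does not have to resolve to anything yet: requests whose Host header matches no virtual host fall through to the first one defined, so any placeholder name (or the raw IP) works until a real domain exists. A minimal sketch of an Ubuntu-style site file, where the paths, the placeholder IP and the site name are assumptions rather than anything from the question (Apache does not allow comments on the same line as a directive, hence the separate comment lines):

        # /etc/apache2/sites-available/myapp  (an assumed site file, Apache 2.2 layout)
        <VirtualHost *:80>
            # Placeholder until a real domain exists; requests that match nothing
            # fall through to the first VirtualHost anyway, so the bare IP still works.
            ServerName 203.0.113.10
            DocumentRoot /var/www/myapp/public
            <Directory /var/www/myapp/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    The "It works!" page comes from the default site in /etc/apache2/sites-enabled/; disabling it (sudo a2dissite default, then sudo /etc/init.d/apache2 reload) and enabling the new site (sudo a2ensite myapp) lets the Rails app answer on the bare IP. The "Could not reliably determine the server's fully qualified domain name" warning is separate and harmless; a global ServerName line (even ServerName localhost) in the main Apache configuration silences it. Serving the app out of /root is also worth avoiding, since Apache normally cannot traverse /root; /var/www or /srv is the usual home.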

    Read the article

  • Unicorn 3.3.1 and Rack 1.1.0 issues?

    - by user41422
    I'm upgrading from Ruby Enterprise Edition 1.8.6 to the latest 1.8.7 version with Unicorn to facilitate an upgrade to Rails 2.3.10, and am running into some issues. Should I uninstall the older versions of these gems? Here are the log messages: I, [2011-02-02T22:06:16.328076 #30672] INFO -- : listening on addr=0.0.0.0:8080 fd=3 I, [2011-02-02T22:06:16.333137 #30672] INFO -- : Refreshing Gem list /srv/ree/bin/unicorn_rails must be run inside RAILS_ROOT: #<Gem::LoadError: can't activate rack (~> 1.1.0, runtime) for ["actionpack-2.3.10", "rails-2.3.10"], already activated rack-1.2.1 for ["unicorn-3.3.1"]> I, [2011-02-02T22:07:12.259436 #30701] INFO -- : listening on addr=0.0.0.0:8080 fd=3 I, [2011-02-02T22:07:12.259952 #30701] INFO -- : Refreshing Gem list /srv/ree/bin/unicorn_rails must be run inside RAILS_ROOT: #<Gem::LoadError: can't activate rack (~> 1.1.0, runtime) for ["actionpack-2.3.10", "rails-2.3.10"], already activated rack-1.2.1 for ["unicorn-3.3.1"]> I, [2011-02-02T22:09:27.787177 #30772] INFO -- : listening on addr=0.0.0.0:8080 fd=3 I, [2011-02-02T22:09:27.787691 #30772] INFO -- : Refreshing Gem list /srv/ree/bin/unicorn_rails must be run inside RAILS_ROOT: #<Gem::LoadError: can't activate rack (~> 1.1.0, runtime) for ["actionpack-2.3.10", "rails-2.3.10"], already activated rack-1.2.1 for ["unicorn-3.3.1"]> I, [2011-02-02T22:10:44.175407 #30846] INFO -- : listening on addr=0.0.0.0:8080 fd=3 I, [2011-02-02T22:10:44.175928 #30846] INFO -- : Refreshing Gem list /srv/ree/bin/unicorn_rails must be run inside RAILS_ROOT: #<Gem::LoadError: can't activate rack (~> 1.1.0, runtime) for ["actionpack-2.3.10", "rails-2.3.10"], already activated rack-1.2.1 for ["unicorn-3.3.1"]>
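
    The error itself says that unicorn_rails activated the newest installed rack (1.2.1) before Rails 2.3.10 could ask for its own ~> 1.1.0, so the two cannot coexist in one process. Rather than uninstalling gems, one way out is to let Bundler pin a single rack for both; a minimal sketch, assuming the app is converted to Bundler (Rails 2.3 needs the documented boot.rb/preinitializer changes) and that these version choices fit the rest of the app:

        # Gemfile
        source "http://rubygems.org"

        gem "rails",   "2.3.10"
        gem "rack",    "~> 1.1.0"   # the version actionpack 2.3.10 expects
        gem "unicorn"

    Then, from RAILS_ROOT:

        bundle install
        bundle exec unicorn_rails -E production -c config/unicorn.rb

    Running through bundle exec keeps unicorn from loading rack 1.2.1 out of the system gems. Alternatively, removing the newer rack (gem uninstall rack -v 1.2.1) also works, as long as nothing else on the machine needs it.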

    Read the article

  • Does being the core programmer of a product (one meant for social good) help in getting into a Ph.D. program at a top university in the USA, say top 20?

    - by Maddy.Shik
    Hey, I am working on a product as the core developer, which will be launched in the USA market in a few months if successful. Can this factor improve my chance of getting into a Ph.D. program at a good university (say top 20 in the US)? Normally good universities like CMU, Stanford, MIT and Cornell are more interested in a student's profile: research work, undergraduate school, etc. I did not graduate from a very good university; it is ranked in the top 20 of India only. Neither have I done research work so far. But being one of the founding members of the company and developing its product, I want to know if this factor can help and to what extent. For universities ranked lower than 20, what matters most is the GRE General score and GPA, but I guess a top university must appreciate a person's real efforts.

    Read the article

  • How to change careers

    - by Jack Black
    For the past 4 years I have worked in C# doing web development. I have really enjoyed it, learnt a lot and have worked hard to get to a position where I am earning good money and enjoy the work. However, lately I have wanted a change. With the "native renaissance" I would like to move my career from high-level application and web development to more down-to-the-metal native development. I haven't done any C or C++ since uni over 4 years ago, so I have begun reading textbooks and websites to brush up. However, one major issue is that I have no practical experience with C++, and although I am brushing up on it, there will be a lot I don't know. Most of the native-code jobs I have seen around me require native experience. The only positions I can find that don't explicitly ask for it are junior-level positions. In my current role I am a mid-level developer, and although there would be a lot to learn in a C++ position, I wouldn't class myself as a junior. I guess my question is: how do people solve this issue when changing programming languages in their profession, and/or how would you approach this hurdle? Like I said, I would really like to try out native development professionally, but I wouldn't want to move back to a junior role. Would employers consider years of managed development plus native hobby projects enough experience?

    Read the article

  • Pair Programming, for or against? [on hold]

    - by user1037729
    I believe it has many advantages over individual programming. Pros: By pairing senior with relatively junior staff, the junior can get up to speed with both the project and general computing experience, and the senior has to re-think the problem in order to communicate with the junior, thus re-checking his own thinking (rubber duck principle!). At least two people will know about any single piece of work; if one person is away the other can cover, and if someone leaves the project, knowledge transfer is easier. Two brains on a complex task are more effective; communication keeps the work flowing freely and provides redundancy in decision making. Code is effectively reviewed as it's being written, so there is no need for a separate review phase, which requires a context switch because someone who has not been working on the piece in question has to understand and review the related code. Reviewing code on your own which you haven't written or architected is not fun, hence counter-productive. Cons: Less bandwidth for performing tasks; let's say we have 4 devs, pair programming requires 2 devs per task, so we would be doing 2 tasks concurrently as opposed to 4. I believe this con does not stand up, as the pair-programmed task would complete sooner and comes with a review built in for free! I.e. the pair-programmed task would be more efficient and thus free up resources earlier. Less flexibility to chop and change tasks, as two developers are tied into a task; when flexibility is required this could be a problem.

    Read the article

  • Error when logging in with Machinist in Shoulda test

    - by user303747
    I am having some trouble getting the right usage of Machinist and Shoulda in my testing. Here is my test: context "on POST method rating" do p = Product.make u = nil setup do u = login_as post :vote, :rating => 3, :id => p end should "set rating for product to 3" do assert_equal p.get_user_vote(u), 3 end And here's my blueprints: Sham.login { Faker::Internet.user_name } Sham.name { Faker::Lorem.words} Sham.email { Faker::Internet.email} Sham.body { Faker::Lorem.paragraphs(2)} User.blueprint do login password "testpass" password_confirmation { password } email end Product.blueprint do name {Sham.name} user {User.make} end And my authentication test helper: def login_as(u = nil) u ||= User.make() @controller.stubs(:current_user).returns(u) u end The error I get is: /home/jason/moderndarwin/vendor/rails/activerecord/lib/active_record/validations.rb:1090:in `save_without_dirty!': Validation failed: Login has already been taken, Email has already been taken (ActiveRecord::RecordInvalid) from /home/jason/moderndarwin/vendor/rails/activerecord/lib/active_record/dirty.rb:87:in `save_without_transactions!' from /home/jason/moderndarwin/vendor/rails/activerecord/lib/active_record/transactions.rb:200:in `save!' from /home/jason/moderndarwin/vendor/rails/activerecord/lib/active_record/connection_adapters/abstract/database_statements.rb:136:in `transaction' from /home/jason/moderndarwin/vendor/rails/activerecord/lib/active_record/transactions.rb:182:in `transaction' from /home/jason/moderndarwin/vendor/rails/activerecord/lib/active_record/transactions.rb:200:in `save!' from /home/jason/moderndarwin/vendor/rails/activerecord/lib/active_record/transactions.rb:208:in `rollback_active_record_state!' from /home/jason/moderndarwin/vendor/rails/activerecord/lib/active_record/transactions.rb:200:in `save!' from /usr/lib/ruby/gems/1.8/gems/machinist-1.0.6/lib/machinist/active_record.rb:55:in `make' from /home/jason/moderndarwin/test/blueprints.rb:37 from /usr/lib/ruby/gems/1.8/gems/machinist-1.0.6/lib/machinist.rb:77:in `generate_attribute_value' from /usr/lib/ruby/gems/1.8/gems/machinist-1.0.6/lib/machinist.rb:46:in `method_missing' from /home/jason/moderndarwin/test/blueprints.rb:37 from /usr/lib/ruby/gems/1.8/gems/machinist-1.0.6/lib/machinist.rb:20:in `instance_eval' from /usr/lib/ruby/gems/1.8/gems/machinist-1.0.6/lib/machinist.rb:20:in `run' from /usr/lib/ruby/gems/1.8/gems/machinist-1.0.6/lib/machinist/active_record.rb:53:in `make' from ./test/functional/products_controller_test.rb:25:in `__bind_1269805681_945912' from /home/jason/moderndarwin/vendor/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:293:in `call' from /home/jason/moderndarwin/vendor/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:293:in `merge_block' from /home/jason/moderndarwin/vendor/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:288:in `initialize' from /home/jason/moderndarwin/vendor/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:169:in `new' from /home/jason/moderndarwin/vendor/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:169:in `context' from ./test/functional/products_controller_test.rb:24 I can't figure out what it is I'm doing wrong... I have tested the login_as with my auth (Authlogic) in my user_controller testing. Any pointers in the right direction would be much appreciated!
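
    For what it's worth, a "Login has already been taken" failure from a blueprint usually means the factory ran outside the transactional part of the test: here p = Product.make sits in the context block body, so it executes while the test class is being loaded (and Product's blueprint makes a fresh User each time), which leaves records behind between runs and lets later Faker draws collide with them. A minimal sketch of the usual arrangement, keeping everything inside setup and using instance variables (names are illustrative, not the poster's exact code):

        context "on POST method rating" do
          setup do
            @product = Product.make          # blueprint runs per-test, inside the transaction
            @user    = login_as
            post :vote, :rating => 3, :id => @product.id
          end

          should "set rating for product to 3" do
            assert_equal 3, @product.get_user_vote(@user)
          end
        end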

    Read the article

  • How do I make an object a property of a model in Ruby on Rails?

    - by iTake
    I have this in my schema: create_table "robots_matches", :force => true do |t| t.integer "robot_id" t.integer "match_id" and I think I want to be able to load a robot and match from within my robots_match model so I can do something like this: robots_match.find(:id).get_robot().Name My attempt in the robots_matches model was this: def get_robot Robot.find(this.id) end I am super new to rails, so feel free to correct my architectural decision here.
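
    The conventional Rails way here is to declare associations on the join model instead of writing a get_robot method; a minimal sketch, assuming the join model is RobotsMatch on top of the robots_matches table, with Robot and Match models using the usual naming:

        class RobotsMatch < ActiveRecord::Base
          belongs_to :robot
          belongs_to :match
        end

        class Robot < ActiveRecord::Base
          has_many :robots_matches
          has_many :matches, :through => :robots_matches
        end

    With that in place, RobotsMatch.find(some_id).robot.name follows robot_id automatically. (Note that the hand-rolled version would need Robot.find(self.robot_id): Ruby uses self rather than this, and the robot's id lives in robot_id, not id.)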

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is storage management, due to the high volumes of ever-growing information. Even if you have storage appliances and a lot of terabytes, things like backup, compression, deduplication, storage relocation, encryption and availability can be a nightmare. One standard option you have with Oracle WebCenter Content is to store data in the database, and the Oracle Database lets you leverage features like compression, deduplication, encryption and seamless backup. But with a huge volume, the challenge is passed to the DBA to keep the WebCenter Content database up and running. One solution is the use of DB partitions for your content repository, but what are the implications of this? Can I fit this with my business requirements? Well, yes. It's up to you how you manage it; you just need a good plan. During your "storage brainstorm plan", keep in mind what you need. Do you need to store petabytes of documents? Do you need everything online? Is there a way to logically separate the "good content" from the "legacy content"? The first thing that comes to my mind is to use the creation date of the document, but remember that a document can receive a lot of revisions, so maybe you should consider the revision creation date instead. Your plan can also have complex rules, like per Document Type, per a custom metadata field like department, or a hybrid per date, per DocType and a specific virtual folder. Extrapolating the use, you can have your repository distributed across different servers, different disks and different disk types (such as SSD, SAS, SATA, tape, ...), separated according to your business requirements, separating the "hot" content from the legacy and easily matching your compliance requirements. If you think of partitioning by revision, the simple way is to consider the dId, the sequential unique id for every content item created in WebCenter Content, or the dLastModified, the date field of the FileStorage table that records when the content was added to the DB table using SecureFiles. Using the scenario of a partitioned repository with a hierarchical separation by date, we will transform the FileStorage table into a partitioned table using "Partition by Range" on the dLastModified column (you can use the dId, or a join with other tables for other metadata such as dDocType, Security, etc.). The test scenario below covers: previously existing data on the JDBC Storage to be migrated to the new partitioned JDBC Storage; partition by date; automatic generation of new partitions based on a pre-defined interval (available only with Oracle Database 11g+); deduplication and compression for legacy data; Oracle WebCenter Content 11g PS5 (could have some customizations that do not affect the test scenario). For the test case you need some data stored using JDBC Storage to serve as the "legacy" data. If you have not done this before, just create a Storage rule pointed to the JDBC Storage: enable the StorageRule metadata field in the UI and upload some documents using this rule. For this test case you can run as the schema owner or as a DBA user. We will use the schema owner TESTS_OCS. I can't forget to mention that this is just a test and you should make a proper backup of your environment. When you use the schema owner, you need some privileges; using the DBA user, grant the privileges needed: REM Grant privileges required for online redefinition.
GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS; GRANT ALTER ANY TABLE TO TESTS_OCS; GRANT DROP ANY TABLE TO TESTS_OCS; GRANT LOCK ANY TABLE TO TESTS_OCS; GRANT CREATE ANY TABLE TO TESTS_OCS; GRANT SELECT ANY TABLE TO TESTS_OCS; REM Privileges required to perform cloning of dependent objects. GRANT CREATE ANY TRIGGER TO TESTS_OCS; GRANT CREATE ANY INDEX TO TESTS_OCS; In our test scenario we will separate the content as Legacy, Day1, Day2, Day3 and Future. This last one will be partitioned automatically using 3 tablespaces in a round-robin mode. In a real scenario the partition rule could be per month, per year or any rule that you choose. Tablespaces for the test scenario: CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A DATAFILE 'tests_ocs_part_round_robin_a.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B DATAFILE 'tests_ocs_part_round_robin_b.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C DATAFILE 'tests_ocs_part_round_robin_c.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; Before starting, gather optimizer statistics on the actual FileStorage table: EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE); Now check that it is possible to execute the redefinition process: EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage',DBMS_REDEFINITION.CONS_USE_PK); If there are no error messages, you are good to go. Create a Partitioned Interim FileStorage table.
You need to create a new table with the partition information to act as an interim table: CREATE TABLE FILESTORAGE_Part ( DID NUMBER(*,0) NOT NULL ENABLE, DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE, DLASTMODIFIED TIMESTAMP (6), DFILESIZE NUMBER(*,0), DISDELETED VARCHAR2(1 CHAR), BFILEDATA BLOB ) LOB (BFILEDATA) STORE AS SECUREFILE ( ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS ) PARTITION BY RANGE (DLASTMODIFIED) INTERVAL (NUMTODSINTERVAL(1,'DAY')) STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C) ( PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_LEGACY LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_LEGACY RETENTION NONE DEDUPLICATE COMPRESS HIGH ), PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY1 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY1 RETENTION AUTO KEEP_DUPLICATES COMPRESS ), PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY2 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY2 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS ), PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY3 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY3 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS ) ); After the creation you should see your partitions defined. Note that only the fixed range partitions have been created, none of the interval partition have been created. Start the redefinition process: BEGIN DBMS_REDEFINITION.START_REDEF_TABLE( uname => 'TESTS_OCS' ,orig_table => 'FileStorage' ,int_table => 'FileStorage_PART' ,col_mapping => NULL ,options_flag => DBMS_REDEFINITION.CONS_USE_PK ); END; This operation can take some time to complete, depending how many contents that you have and on the size of the table. Using the DBA user you can check the progress with this command: SELECT * FROM v$sesstat WHERE sid = 1; Copy dependent objects: DECLARE redefinition_errors PLS_INTEGER := 0; BEGIN DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS( uname => 'TESTS_OCS' ,orig_table => 'FileStorage' ,int_table => 'FileStorage_PART' ,copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS ,copy_triggers => TRUE ,copy_constraints => TRUE ,copy_privileges => TRUE ,ignore_errors => TRUE ,num_errors => redefinition_errors ,copy_statistics => FALSE ,copy_mvlog => FALSE ); IF (redefinition_errors > 0) THEN DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors)); END IF; END; With the DBA user, verify that there's no errors: SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS; *Note that will show 2 lines related to the constrains, this is expected. 
Synchronize the interim table FileStorage_PART: BEGIN DBMS_REDEFINITION.SYNC_INTERIM_TABLE( uname => 'TESTS_OCS', orig_table => 'FileStorage', int_table => 'FileStorage_PART'); END; Gather statistics on the new table: EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE); Complete the redefinition: BEGIN DBMS_REDEFINITION.FINISH_REDEF_TABLE( uname => 'TESTS_OCS', orig_table => 'FileStorage', int_table => 'FileStorage_PART'); END; During the execution the FileStorage table is locked in exclusive mode until the operation finishes. After the last command the FileStorage table is partitioned. If you have content outside the fixed ranges, you should see the new partitions created automatically, without errors even if you "forgot" to create all the future ranges. You can now drop the FileStorage_PART table: DROP TABLE FileStorage_PART PURGE; To check that the FileStorage table is valid and is partitioned, use the command: SELECT num_rows,partitioned FROM user_tables WHERE table_name = 'FILESTORAGE'; You can list the contents of the FileStorage table in a specific partition, for example: SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY); Some useful commands that you can use to check the partitions (note that you need to run them as a DBA user): SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE'; SELECT * FROM DBA_TABLESPACES WHERE tablespace_name like 'TESTS_OCS%'; After the redefinition process completes you have a new FileStorage table storing all content whose Storage rule points to the JDBC Storage, partitioned using the rules set during the creation of the temporary interim FileStorage_PART table. At this point you can test WebCenter Content by downloading the documents (original and renditions). Note that the content could already be in the cache area: take a look in the weblayout directory to see if a file with the same id is there, then click on the web rendition of your test file and check that the file was created and you can open it; this means that everything is working. The redefinition process can be repeated many times, which allows you to test which layout is better, over and over again. Now some interesting maintenance actions related to the partitions: Make a tablespace read only. No issues viewing content; WebCenter Content does not alter the revisions. When you try to delete content that is part of a read-only tablespace, an error occurs and the document is not deleted. The only way to prevent errors today is to create a custom component that checks the partitions and, if a document is in a "Read Only" repository, executes the deletion of the metadata and marks the document to be deleted at the next DB maintenance, like a new redefinition. Take a tablespace off-line for archiving purposes or any other reason.
When you try open an document that is included in this tablespace will receive an error that was unable to retrieve the content, but the others online tablespaces are not affected. Same behavior when deleting documents. Again, an custom component is the solution. If you have an document “out of range”, the component can show an message that the repository for that document is offline. This can be extended to a option to the user to request to put online again. Moving some legacy content to an offline repository (table) using the Exchange option to move the content from one partition to a empty nonpartitioned table like FileStorage_LEGACY. Note that this option will remove the registers from the FileStorage and will not be able to open the stored content. You always need to keep in mind the indexes and constrains. An redefinition separating the original content (vault) from the renditions and separate by date ate the same time. This could be an option for DAM environments that want to have an special place for the renditions and put the original files in a storage with less performance. The process will be the same, you just need to change the script of the interim table to use composite partitioning. Will be something like: CREATE TABLE FILESTORAGE_RenditionPart ( DID NUMBER(*,0) NOT NULL ENABLE, DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE, DLASTMODIFIED TIMESTAMP (6), DFILESIZE NUMBER(*,0), DISDELETED VARCHAR2(1 CHAR), BFILEDATA BLOB ) LOB (BFILEDATA) STORE AS SECUREFILE ( ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS ) PARTITION BY LIST (DRENDITIONID) SUBPARTITION BY RANGE (DLASTMODIFIED) ( PARTITION Vault VALUES ('primaryFile') ( SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE) ) ,PARTITION WebLayout VALUES ('webViewableFile') ( SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE) ) ,PARTITION Special VALUES ('Special') ( SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN 
(TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE) ) )ENABLE ROW MOVEMENT; The next post related to the partitioned repository will come with a sample component to handle the possible exceptions when you need to take a tablespace/partition offline or move it to another place. Also, we can include some integration with Retention Management and Records Management. Another subject related to partitioning is the ability to create a FileStore Provider pointed to a different database, raising the level of distributed storage vs. performance. Let us know if this is important to you or if you have a use case not listed; leave a comment. Cross-posted on the blog.ContentrA.com

    Read the article

  • Web standards or risk avoidance?

    - by Junior Dev
    My company is building an App Engine application. The app encounters a bug (possibly due to an issue with App Engine itself, as per our research) on IE9, but it cannot be reliably reproduced and is experienced by a small percentage of users. The workaround is to force IE9 to use IE8 mode. As a lazy front end developer (who doesn't like CSS hacks, shims and polyfills) I think it's OK to at least try going back to IE9 mode and see what happens, while we're still in private beta. The senior engineer (being more pragmatic) would rather that we continue forcing IE9 users to use the older IE8 mode. Who is right?
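
    For reference, forcing IE9 into IE8 mode is usually done with an X-UA-Compatible response header or meta tag; a minimal example of the meta-tag form (standard IE behaviour, not anything specific to this app):

        <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE8">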

    Read the article

  • How do I disable the mysql command in the sudoers file?

    - by Carlos A. Junior
    How can I disable the /usr/bin/mysql command in the sudoers file? ... I've actually tried it this way: %tailonly ALL=!/usr/bin/mysql But when I log in as user 'tailonly' of group 'tailonly', the command is still enabled. In summary, I only want the 'tailonly' user to be able to run 'tail -f /usr/app/*.log' ... Is this possible? Edit: With this config, the user 'tailonly' can still reach the mysql terminal with the 'mysql' command: $: sudo su $: visudo Cmnd_Alias MYSQL = /usr/bin/mysql Cmnd_Alias TAIL=/usr/bin/tail -f /jacad/jacad3/logs/*.log # User privilege specification root ALL=(ALL:ALL) ALL # Members of the admin group may gain root privileges %admin ALL=(ALL) ALL # Allow members of group sudo to execute any command %sudo ALL=(ALL:ALL) ALL %swa ALL=/etc/init.d/jacad3 stop %swa ALL=/etc/init.d/jacad3 start %swa ALL=/etc/init.d/jacad3 restart %swa ALL=sudoedit /jacad/jacad3/bin/jacad_start.sh %tailonly ALL=ALL,!MYSQL
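
    For what it's worth, sudoers only controls what runs through sudo, so it cannot stop 'tailonly' from typing mysql directly, and a grant of ALL minus an exception (ALL,!MYSQL) is easy to get around (sudo su, sudo sh, or a copied binary all bypass it). A safer arrangement is to whitelist only the one command the group needs; a minimal sketch, keeping the alias name from the question:

        # /etc/sudoers (edit with visudo)
        Cmnd_Alias TAIL = /usr/bin/tail -f /jacad/jacad3/logs/*.log

        # grant the tailonly group nothing except the tail command, run as root
        %tailonly ALL = (root) TAIL

    With this in place, sudo tail -f /jacad/jacad3/logs/somefile.log works for the group, while sudo mysql and sudo su are refused; plain mysql (without sudo) has to be handled by MySQL's own user accounts or file permissions, not by sudoers.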

    Read the article

  • Configuring the iPlanet as web tier for Oracle WebCenter Content (UCM)

    - by Adao Junior
    If you are looking to configure the iPlanet as the web server/proxy for Oracle WebCenter Content, you probably won't find specific documentation for that, or you will find some old, complex notes related to the old 10gR3. This post will help you out with a few simple steps. The test scenario (shown in the diagram in the original post) assumes that in production you will deploy in a cluster environment. First you need the software; for our scenario you will need: - Oracle iPlanet Web Server 7.0.15+ (installed) - Oracle WebCenter Content 11gR1 PS5 (installed) - Oracle WebLogic Web Server Plugins 11g (1.1) - Supported JDK (using Oracle Java JDK 7u4 for the test) - Certified client OS - Certified server OS (using Oracle Solaris 11 for the test) - Certified database (using Oracle Database 11.2.0.3 for the test) Then the configuration: - Download the latest plugin: http://www.oracle.com/technetwork/middleware/ias/downloads/wls-plugins-096117.html - Extract the WLSPlugin11g-iPlanet7.0 into some folder, like <iPlanet_Home>/plugins/wls11 - Include the plugin reference in magnus.conf. On Unix (Solaris or Linux), include the line: Init fn="load-modules" shlib="/apps/oracle/WebServer7/plugins/wls11/lib/mod_wl.so" On Windows, include the line: Init fn="load-modules" shlib="D:\\oracle\\WebServer7\\plugins\\wls11\\lib\\mod_wl.dll" - Include the proxy reference in the obj.conf of each instance: <Object name="weblogic" ppath="*/cs/*"> Service fn="wl-proxy" WebLogicCluster="wcc-node1:16201,wcc-node2:16202, wcc-node3:16203" </Object> <Object name="weblogic" ppath="*/_dav/*"> Service fn="wl-proxy" WebLogicCluster="wcc-node1:16201,wcc-node2:16202, wcc-node3:16203" </Object> <Object name="weblogic" ppath="*/_ocsh/*"> Service fn="wl-proxy" WebLogicCluster="wcc-node1:16201,wcc-node2:16202, wcc-node3:16203" </Object> <Object name="weblogic" ppath="*/adfAuthentication/*"> Service fn="wl-proxy" WebLogicCluster="wcc-node1:16201,wcc-node2:16202, wcc-node3:16203" </Object> If you are using a single-node setup, change the Service fn=... line to something like: Service fn="wl-proxy" WebLogicHost=<wcc-server> WebLogicPort=16200 With these configurations you should have the WebCenter Content UI working through the iPlanet; test it at [http://<web-server>/cs/]. With the UI working, the last step is to configure WebDAV: - Go to the iPlanet Admin Console (usually https://<web-server>:8989) - Go to Configurations >> [instance] >> Virtual Servers >> [Virtual Server] >> WebDAV - Click New - Populate the URI with /cs/idcplg/webdav - Select "Anyone (No Authentication)"; WebCenter Content will take care of the security. This will allow you to use the WebDAV feature and the Desktop Integration Suite, including double-byte characters. Other iPlanet tuning could be done; I can cover it in the next post related to the iPlanet. Cross-posted on the ContentrA.com Blog Related posts: - Using a Web Proxy Server with WebCenter Family

    Read the article

  • LiveMeeting VC PowerShell PASS – Troubleshooting SQL Server with PowerShell

    - by Laerte Junior
    Guys, join me on Wednesday July 18th, 12 noon EDT (GMT -4), for a presentation called Troubleshooting SQL Server With PowerShell. It will be in English, so please make allowances for this. I'm sure that you're aware that my English is not perfect, but it is not so bad. I will do my best, you can be sure. The registration link will be available soon from PowerShell.sqlpass.org, so I hope to see you there. It will be a session without slides. Just code; pure PowerShell code. Trust me, we will see a lot of COOL stuff. Big thanks to Aaron Nelson (@sqlvariant) for the opportunity! Here are some more details about the presentation: "Troubleshooting SQL Server with PowerShell – The Next Level" It is normal for us to have to face poorly performing queries or even complete failure in our SQL Server environments. This can happen for a variety of reasons, including poor database designs, hardware failure, improperly-configured systems and OS updates applied without testing. As Database Administrators, we need to take precautions to minimize the impact of these problems when they occur, and so we need the tools and methodology required to identify and solve issues quickly. In this session we will use PowerShell to explore some common troubleshooting techniques used in our day-to-day work as a DBA. This will include a variety of activities, including gathering performance counters on several servers at the same time using background jobs, identifying blocked sessions, and reading and filtering the SQL error log even if the instance is offline. The approach will use some advanced PowerShell techniques that allow us to scale the code to multiple servers and run the data collection in asynchronous mode.

    Read the article

  • Should tests be in the same Ruby file or in separated Ruby files?

    - by Junior Mayhé
    While using Selenium and Ruby to do some functional tests, I am worried with the performance. So is it better to add all test methods in the same Ruby file, or I should put each one in separated code files? Below a sample with all tests in the same file: # encoding: utf-8 require "selenium-webdriver" require "test/unit" class Tests < Test::Unit::TestCase def setup @driver = Selenium::WebDriver.for :firefox @base_url = "http://mysite" @driver.manage.timeouts.implicit_wait = 30 @verification_errors = [] @wait = Selenium::WebDriver::Wait.new :timeout => 10 end def teardown @driver.quit assert_equal [], @verification_errors end def element_present?(how, what) @driver.find_element(how, what) true rescue Selenium::WebDriver::Error::NoSuchElementError false end def verify(&blk) yield rescue Test::Unit::AssertionFailedError => ex @verification_errors << ex end def test_1 @driver.get(@base_url + "/") # a huge test here end def test_2 @driver.get(@base_url + "/") # a huge test here end def test_3 @driver.get(@base_url + "/") # a huge test here end def test_4 @driver.get(@base_url + "/") # a huge test here end def test_5 @driver.get(@base_url + "/") # a huge test here end end
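
    If the tests do get split, the usual pattern is one small helper that owns the browser setup plus one file per test case; a minimal sketch, with the file names and the Ruby 1.9+ require_relative as assumptions:

        # selenium_helper.rb
        require "selenium-webdriver"
        require "test/unit"

        module SeleniumHelper
          def setup
            @driver = Selenium::WebDriver.for :firefox
            @base_url = "http://mysite"
            @driver.manage.timeouts.implicit_wait = 30
          end

          def teardown
            @driver.quit
          end
        end

        # test_home_page.rb (one file per test case)
        require_relative "selenium_helper"

        class TestHomePage < Test::Unit::TestCase
          include SeleniumHelper

          def test_1
            @driver.get(@base_url + "/")
            # a huge test here
          end
        end

    Performance-wise the split changes little: Test::Unit runs setup and teardown around every test method either way, so each test pays for its own browser start regardless of how the methods are grouped into files; splitting is mostly about keeping the suite readable.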

    Read the article

  • Precise Pangolin won't install

    - by Percival Júnior
    I've been an Ubuntu user since the first release, but now I can't install Precise beta 2, neither 32-bit nor 64-bit. I've tried many times and already downloaded the nightly ISOs, but nothing. 11.10 installs smoothly (I'm using it). My hardware is a Samsung notebook with 4 GB of RAM and a 25 GB partition for Precise. Here is what actually happens: I boot from the USB drive, choose my language (Portuguese - Brazilian) and choose the partition; up to there everything is normal. But when I put in my name, user and password and click Next, the screen to choose my region appears; I click Next again and at this point I get STUCK!!! NOTHING HAPPENS. I've waited for hours and the whole system freezes. When I try to reboot, my GRUB is gone. Thanks for the help.

    Read the article

  • Need help to install persistent Ubuntu on USB drive

    - by Junior
    I am a new user of Ubuntu. I would like to install a persistent Ubuntu 11.04 to my USB stick, and it should be able to work as a guest OS running on Windows, so that I can boot it on computers other than the one I used for the installation. I have used several creators such as UNetbootin; however, from my understanding they can only create a live Linux on which I am unable to save my configurations and files. If possible I would also like to bypass the BIOS, that is to say, load it from a virtual machine without having to restart the computer. Thanks in advance!

    Read the article

  • What's wrong with See[Mike]Code? (no relation)

    - by mbcrump
    I have been hearing a lot about the website See[Mike]Code. Basically, the site creates an interview url and a job candidate url and lets you see the potential programmer’s code (specifically .NET developer). Below is the candidate’s URL   Below is the interviewer url   So you might think, ah, this is a good thing. We can screen candidates cheaper and more efficiently. In reality, this is only a good thing if you want your programmer to develop using notepad.  I use the most efficient tools that exist to do my job. I would simply fire up VS2010 and type “for” and hit the tab key twice and get the following template.   I have no problem keeping MSDN/Google in one of my monitors. I spend time learning VS macros and using Aurora XAML/Expression to produce my XAML for WPF. Sure, I can write a for loop without using the VS Macro, but the real question is, “Why should I?”. My point being, if you really want to test a .NET programmer knowledge then fire up his native working environment and let him use the features of the IDE to develop the simple 10-line program. For a more sophisticated program, then give him 20 minutes and allow access to msdn/google. If the programmer cannot find at the right path then give him the boot.

    Read the article

  • Professionalism of online username / handle

    - by Thanatos
    I have in the past, and continue currently, used the handle "thanatos" on a lot of Internet sites, and if that isn't available (which happens ~50% of the time), "deathanatos". "Thanatos" is the name of the Greek god or personification of death (not to be confused with Hades, the Greek god of the underworld). "Dea" is a natural play-on-words to make the handle work in situations where the preferred handle has already been taken, without having to resort to numbers and remaining pronounceable. I adopted the handle many years ago — at the time, I was reading Edith Hamilton's Mythology, and Piers Anthony's On a Pale Horse, both still favorites of mine, and the name was born out of that. When I created the handle, I was fairly young, and valued privacy while online, not giving out my name. As I've become a more competent programmer, I'm starting to want to release some of my private works under FOSS licenses and such, and sometimes under my own name. This has started to tie this handle with my real name. I've become increasingly aware of my "web image" in the last few years, as I've been job hunting. As a programmer, I have a larger-than-average web presence, and I've started to wonder: Is this handle name professional? Does a handle name matter in a professional sense? Should I "rebrand"? (While one obviously wants to avoid hateful or otherwise distasteful names, is a topic such as "death" (to which my name is tied) proper? What could be frowned upon?) To try to make this a bit more programmer specific: Programmers are online — a lot — and some of us (and some who are not us) tend to put emphasis on a "web presence". I would argue that a prudent programmer (or anyone in an occupation that interacts online a lot) would be aware of their web presence. While not strictly limited to just programmers, for better or worse, it is a part of our world.

    Read the article

  • Aging vs. Coding Skills

    - by Renan Malke Stigliani
    A little background, since it may be part of my point of view: I'm a C#/Java programmer, 23 years old, coding since I was 18. I started out studying C and working with Cobol, and after a year I quickly moved to C#/Java web development, and have worked with it in about 3 or 4 companies (I've just moved again). In my (brief) professional career I have encountered some older programmers, and every time it was very hard to work with them, since I was a much better programmer than they were. And it is not just about language skills; some of them had serious problems understanding basic logic. Now I wonder how these programmers get jobs on the market, since (I imagine) they have more expenses, and thus have to make more money, and are really counter-productive. In these cases, other project members constantly have to stop to help them out. Every time, they eventually quit... So I wonder: does the aging process slow down the learning rate and logical thinking? Does a programmer have to, or at least should they, move into management before getting old? Please, my intention is not to be disrespectful to older people. I am fully aware that this is NOT the case for all older programmers; I often see very good older programmers around the net, I just have never met one up close.

    Read the article

< Previous Page | 282 283 284 285 286 287 288 289 290 291 292 293  | Next Page >