Search Results

Search found 22238 results on 890 pages for 'db security'.


  • A question about the basics of LINQ to SQL

    - by Alex
    I just started learning LINQ to SQL, and so far I'm impressed with the ease of use and good performance. I used to think that when doing LINQ queries like

        from Customer in DB.Customers where Customer.Age > 30 select Customer

    LINQ gets all customers from the database ("SELECT * FROM Customers"), moves them into an array, and then searches that array using .NET methods. That would be very inefficient: what if there are hundreds of thousands of customers in the database? Making such big SELECT queries would kill the web application. Now, after experiencing how fast LINQ to SQL actually is, I have started to suspect that when doing the query I just wrote, LINQ somehow converts it to a SQL query string

        SELECT * FROM Customers WHERE Age > 30

    and only runs the query when necessary. So my question is: am I right? And when is the query actually run? The reason I'm asking is not only that I want to understand how it works in order to build well-optimized applications, but also that I came across the following problem. I have 2 tables; one of them is Books, the other has information on how many books were sold on certain days. My goal is to select books that had at least 50 sales/day in the past 10 days. It's done with this simple query:

        from Book in DB.Books
        where (from Sale in DB.Sales
               where Sale.SalesAmount >= 50 && Sale.DateOfSale >= DateTime.Now.AddDays(-10)
               select Sale.BookID).Contains(Book.ID)
        select Book

    The point is, I have to use the checking part in several queries, so I decided to create a sequence with the IDs of all popular books:

        var popularBooksIDs = from Sale in DB.Sales
                              where Sale.SalesAmount >= 50 && Sale.DateOfSale >= DateTime.Now.AddDays(-10)
                              select Sale.BookID;

    BUT when I try to do the query now:

        from Book in DB.Books
        where popularBooksIDs.Contains(Book.ID)
        select Book

    It doesn't work! That's why I think we can't use these kinds of shortcuts in LINQ to SQL queries, just as we can't use them in real SQL. We have to write straightforward queries, am I right?
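    In case it helps later readers: LINQ to SQL does defer execution. The query is translated to SQL and sent to the server only when the results are enumerated (foreach, ToList(), Count(), etc.), and a non-materialized IQueryable such as popularBooksIDs can normally be reused inside another query, where Contains() becomes a SQL IN subquery. A minimal sketch illustrating both points; the MyDataContext name is a placeholder, not from the original post:

        using System;
        using System.Linq;
        // System.Data.Linq provides DataContext and GetCommand.

        // Inside some method:
        using (var db = new MyDataContext())
        {
            // No SQL has been sent yet: this is just an expression tree.
            IQueryable<int> popularBookIds =
                from sale in db.Sales
                where sale.SalesAmount >= 50
                      && sale.DateOfSale >= DateTime.Now.AddDays(-10)
                select sale.BookID;

            // Still no SQL: composing queries only builds a bigger expression tree.
            var popularBooks =
                from book in db.Books
                where popularBookIds.Contains(book.ID)   // becomes WHERE ID IN (subquery)
                select book;

            // Inspect the single SQL statement that would be executed:
            Console.WriteLine(db.GetCommand(popularBooks).CommandText);

            // Execution happens here, on enumeration.
            foreach (var book in popularBooks)
                Console.WriteLine(book.ID);
        }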


  • Perl Dancer: passing database info to template

    - by Bubnoff
    Following the Dancer tutorial here: http://search.cpan.org/dist/Dancer/lib/Dancer/Tutorial.pod

    I'm using my own sqlite3 database with this schema:

        CREATE TABLE if not exists location (location_code TEXT PRIMARY KEY, name TEXT, stations INTEGER);
        CREATE TABLE if not exists session (id INTEGER PRIMARY KEY, date TEXT, sessions INTEGER, location_code TEXT,
            FOREIGN KEY(location_code) REFERENCES location(location_code));

    My Dancer code (helloWorld.pm) for the database:

        package helloWorld;
        use Dancer;
        use DBI;
        use File::Spec;
        use File::Slurp;
        use Template;

        our $VERSION = '0.1';

        set 'template' => 'template_toolkit';
        set 'logger' => 'console';

        my $base_dir = qq(/home/automation/scripts/Area51/perl/dancer);

        # database crap
        sub connect_db {
            my $db = qw(/home/automation/scripts/Area51/perl/dancer/sessions.sqlite);
            my $dbh = DBI->connect("dbi:SQLite:dbname=$db", "", "",
                                   { RaiseError => 1, AutoCommit => 1 });
            return $dbh;
        }

        sub init_db {
            my $db = connect_db();
            my $file = qq($base_dir/schema.sql);
            my $schema = read_file($file);
            $db->do($schema) or die $db->errstr;
        }

        get '/' => sub {
            my $branch_code = qq(BPT);
            my $dbh = connect_db();
            my $sql = q(SELECT * FROM session);
            my $sth = $dbh->prepare($sql) or die $dbh->errstr;
            $sth->execute or die $dbh->errstr;
            my $key_field = q(id);
            template 'show_entries.tt', {
                'branch' => $branch_code,
                'data'   => $sth->fetchall_hashref($key_field),
            };
        };

        init_db();
        true;

    I tried the example template on the site, but it doesn't work:

        <% FOREACH id IN data.keys.nsort %>
        <li>Date is: <% data.$id.sessions %> </li>
        <% END %>

    It produces a page, but with no data. How do I troubleshoot this when no clues come up on the console/CLI? Thanks, Bubnoff
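    One low-tech way to troubleshoot is to dump exactly what the route hands to the template; with logger set to 'console', the output appears right in the terminal. A sketch, assuming the same connect_db() as above:

        use Data::Dumper;

        get '/' => sub {
            my $dbh = connect_db();
            my $sth = $dbh->prepare(q(SELECT * FROM session)) or die $dbh->errstr;
            $sth->execute or die $dbh->errstr;
            my $data = $sth->fetchall_hashref('id');   # hashref keyed by id
            debug Dumper($data);                       # shows whether the data is empty
            template 'show_entries.tt', { data => $data };
        };

    If the dump is empty, the problem is on the database side (wrong file path, empty table); if it is populated, the problem is in the template variable names.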


  • CI pagination, POST problem

    - by Gwood
    Okay, I am pretty new to CI and I am stuck on pagination. I am paginating a record set that is the result of a query, and it mostly seems to work, but there's some problem, probably with the links. I am displaying 10 results per page. If there are 10 or fewer results, it's fine, and if I pull up all the records in the table it also works fine. But when the result is more than 10 rows, the first 10 are displayed perfectly, and then when I click the pagination link to get to the next page, it displays the rest of the results from the query as well as other records in the table. ??? I am confused. Any help?

    Here's the model code I am using:

        function getTeesLike($field, $param)
        {
            $this->db->like($field, $param);
            $this->db->limit(10, $this->uri->segment(3));
            $query = $this->db->get('shirt');
            if ($query->num_rows() > 0) {
                return $query->result_array();
            }
        }

        function getNumTeesfromQ($field, $param)
        {
            $this->db->like($field, $param);
            $query = $this->db->get('shirt');
            return $query->num_rows();
        }

    And here's the controller code:

        $KW = $this->input->post('searchstr');
        $this->load->library('pagination');
        $config['base_url'] = 'http://localhost/cit/index.php/tees/show/';
        $config['total_rows'] = $this->T->getNumTeesfromQ('Title', $KW);
        $config['per_page'] = '10';
        $this->pagination->initialize($config);
        $data['tees'] = $this->T->getTeesLike('Title', $KW);
        $data['title'] = 'Displaying Tees data';
        $data['header'] = 'Tees List';
        $data['links'] = $this->pagination->create_links();
        $this->load->view('tee_res', $data);
        // What am I doing wrong here ???? Pls help ...

    I guess the problem is with $KW = $this->input->post('searchstr');, because if I hard-code a value for $KW it works fine. Maybe I should use POST differently, but how do I pass the value from the form without POSTing it? It's CI, so not GET ... ??????
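    For what it's worth, the usual explanation of this symptom: the pagination links are plain GET requests, so $this->input->post('searchstr') is empty on page 2 and the LIKE filter silently disappears. One common fix is to persist the term in the session; a sketch (method and key names illustrative):

        function show()
        {
            $this->load->library('session');
            $this->load->helper('url');

            // On the initial POST, remember the term; on later GET requests
            // (the pagination links), fall back to the remembered value.
            $KW = $this->input->post('searchstr');
            if ($KW !== false) {
                $this->session->set_userdata('searchstr', $KW);
            } else {
                $KW = $this->session->userdata('searchstr');
            }

            $this->load->library('pagination');
            $config['base_url']   = site_url('tees/show');
            $config['total_rows'] = $this->T->getNumTeesfromQ('Title', $KW);
            $config['per_page']   = 10;
            $this->pagination->initialize($config);

            $data['tees']  = $this->T->getTeesLike('Title', $KW);
            $data['links'] = $this->pagination->create_links();
            $this->load->view('tee_res', $data);
        }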


  • Help With LINQ: Mixed Joins and Specifying Default Values

    - by Corey O.
    I am trying to figure out how to do a mixed join in LINQ, with specific access to 2 LINQ objects. Here is an example of how the actual T-SQL query might look:

        SELECT *
        FROM [User] AS [a]
        INNER JOIN [GroupUser] AS [b] ON [a].[UserID] = [b].[UserID]
        INNER JOIN [Group] AS [c] ON [b].[GroupID] = [c].[GroupID]
        LEFT JOIN [GroupEntries] AS [d] ON [a].[GroupID] = [d].[GroupID]
        WHERE [a].[UserID] = @UserID

    At the end, what I would like is an enumerable full of GroupEntry objects. What I am interested in is the last two tables/objects in this query. I will be displaying Groups as group headers, with all of the Entries underneath their group heading. If there are no entries for a group, I still want to see that group as a header without any entries. Here's what I have so far:

        public void DisplayEntriesByUser(int user_id)
        {
            MyDataContext db = new MyDataContext();

            IEnumerable<GroupEntries> entries = (
                from user in db.Users
                where user.UserID == user_id
                join group_user in db.GroupUsers on user.UserID equals group_user.UserID into a
                from join1 in a
                join group in db.Groups on join1.GroupID equals group.GroupID into b
                from join2 in b
                join entry in db.Entries.DefaultIfEmpty() on join2.GroupID equals entry.GroupID
                select entry
            );

            int last_group_id = 0;
            foreach (GroupEntry entry in entries)
            {
                if (last_group_id == 0 || entry.GroupID != last_group_id)
                {
                    last_group_id = entry.GroupID;
                    System.Console.WriteLine("---{0}---", entry.Group.GroupName.ToString().ToUpper());
                }
                if (entry.EntryID)
                {
                    System.Console.WriteLine("   {0}: {1}", entry.Title, entry.Text);
                }
            }
        }

    The example above does not work quite as expected. There are 2 problems that I have not been able to solve:

    1. I still seem to be getting an INNER JOIN instead of a LEFT JOIN on the last join; I am not getting any empty results, so groups without entries do not appear.

    2. I need to figure out a way to fill in default values for blank sets of entries. That is, if there is a group without an entry, I would like a mostly blank entry returned, except that I'd want the EntryID to be null or 0, the GroupID to be that of the empty group it represents, and I'd need a handle on the entry.Group object (i.e. its parent, empty Group object).

    Any help on this would be greatly appreciated. Note: table names and their real-world representation were devised purely for this example, but their relations simplify what I'm trying to do.
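    For reference, the usual LINQ to SQL shape for a left outer join applies DefaultIfEmpty() to the group-join result, not to the table itself; that is likely why the last join stays an INNER JOIN. A sketch, with type and property names assumed from the question rather than verified against a real schema:

        // db and user_id as in the question's method.
        var groupsWithEntries =
            from gu in db.GroupUsers
            where gu.UserID == user_id
            join g in db.Groups on gu.GroupID equals g.GroupID
            join e in db.Entries on g.GroupID equals e.GroupID into entryGroup
            from e in entryGroup.DefaultIfEmpty()   // left join: e is null when a group has no entries
            select new
            {
                GroupID   = g.GroupID,
                GroupName = g.GroupName,
                EntryID   = (int?)e.EntryID,        // null for empty groups
                Title     = e != null ? e.Title : null,
                Text      = e != null ? e.Text : null
            };

        foreach (var row in groupsWithEntries)
        {
            // Groups without entries still appear once, with EntryID == null,
            // which also covers the "default values" requirement in problem 2.
        }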


  • openDatabase Hello World

    - by cf_PhillipSenn
    I'm trying to learn about openDatabase, and I think I'm getting it to INSERT INTO Table1, but I can't verify that the SELECT * FROM Table1 is working.

        <html>
        <head>
        <script src="http://www.google.com/jsapi"></script>
        <script type="text/javascript">
          google.load("jquery", "1");
        </script>
        <script type="text/javascript">
          var db;
          $(function(){
            db = openDatabase('HelloWorld');
            db.transaction(
              function(transaction) {
                transaction.executeSql(
                  'CREATE TABLE IF NOT EXISTS Table1 ' +
                  ' (TableID INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, ' +
                  '  Field1 TEXT NOT NULL );'
                );
              }
            );
            db.transaction(
              function(transaction) {
                transaction.executeSql(
                  'SELECT * FROM Table1;',
                  function (transaction, result) {
                    for (var i=0; i < result.rows.length; i++) {
                      alert('1');
                      $('body').append(result.rows.item(i));
                    }
                  },
                  errorHandler
                );
              }
            );
            $('form').submit(function() {
              var xxx = $('#xxx').val();
              db.transaction(
                function(transaction) {
                  transaction.executeSql(
                    'INSERT INTO Table1 (Field1) VALUES (?);',
                    [xxx],
                    function(){ alert('Saved!'); },
                    errorHandler
                  );
                }
              );
              return false;
            });
          });

          function errorHandler(transaction, error) {
            alert('Oops. Error was '+error.message+' (Code '+error.code+')');
            transaction.executeSql('INSERT INTO errors (code, message) VALUES (?, ?);',
                                   [error.code, error.message]);
            return false;
          }
        </script>
        </head>
        <body>
          <form method="post">
            <input name="xxx" id="xxx" />
            <p>
              <input type="submit" name="OK" />
            </p>
            <a href="http://www.google.com">Cancel</a>
          </form>
        </body>
        </html>
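    One thing worth checking: Web SQL's executeSql takes (sql, args, successCallback, errorCallback), and the SELECT above omits the args array, so the success callback sits in the argument slot. Also, rows.item(i) is an object, so appending it directly prints [object Object]. A sketch with both points addressed (the column name is taken from the schema above):

        // executeSql expects (sql, args, onSuccess, onError); note the empty
        // bind-parameter array before the success callback.
        db.transaction(function (tx) {
          tx.executeSql(
            'SELECT * FROM Table1;',
            [],
            function (tx, result) {
              for (var i = 0; i < result.rows.length; i++) {
                // Read the column by name off the row object.
                $('body').append('<p>' + result.rows.item(i).Field1 + '</p>');
              }
            },
            errorHandler
          );
        });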


  • Having trouble doing an update with a LINQ to SQL object

    - by Pure.Krome
    Hi folks, I've got a simple LINQ to SQL object. I grab it from the database, change a field, then save. No rows are updated. :( When I check the full SQL that is sent over the wire, I notice that the update matches the row not via the primary key but on all the fields in the WHERE clause. Is this normal? I would have thought it would update the field(s) with the WHERE clause matching on the primary key, instead of where'ing (is that a word :P) on each field. Here's the code:

        using (MyDatabase db = new MyDatabase())
        {
            var boardPost = (from bp in db.BoardPosts
                             where bp.BoardPostId == boardPostId
                             select bp).SingleOrDefault();

            if (boardPost != null && boardPost.BoardPostId > 0)
            {
                boardPost.ListId = listId; // This changes the value from 0 to 'x'
                db.SubmitChanges();
            }
        }

    And here's some sample SQL:

        exec sp_executesql N'UPDATE [dbo].[BoardPost]
        SET [ListId] = @p6
        WHERE ([BoardPostId] = @p0) AND .... <snip the other fields>',
        N'@p0 int,@p1 int,@p2 nvarchar(9),@p3 nvarchar(10),@p4 int,@p5 datetime,@p6 int',
        @p0=1276,@p1=212787,@p2=N'ttreterte',@p3=N'ttreterte3',@p4=1,
        @p5='2009-09-25 12:32:12.7200000',@p6=72

    Now, I know there's a datetime field in this update .. and when I checked the DB its value was/is '2009-09-25 12:32:12.720' (fewer zeros than above) .. so I'm not sure if that is messing up the WHERE clause condition ... but still! Shouldn't it do a WHERE clause on the PK .. if anything .. for speed! Yes / no?

    UPDATE: After reading nitzmahone's reply, I tried playing around with optimistic concurrency on some values, and it still didn't work. :( So then I noticed something new: with optimistic concurrency happening, the generated SQL includes a WHERE clause on the field it's trying to update, and when that happens it doesn't find the row. In the SQL above, the WHERE clause looks like this:

        WHERE ([BoardPostId] = @p0) AND ([ListId] IS NULL) AND ... <rest snipped>

    This doesn't sound right! The value in the DB is null before I do the update, but when I add the ListId value to the WHERE clause (or more to the point, when L2S adds it because of the optimistic concurrency), it fails to find/match the row. wtf?
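    The behavior described is LINQ to SQL's default optimistic concurrency: every mapped member left at its default UpdateCheck setting participates in the WHERE clause. A sketch of the usual way to switch this off per column with attribute mapping (entity and column names are taken from the question; the extra column is illustrative; the same setting is exposed as "Update Check" in the O/R designer's property grid):

        using System;
        using System.Data.Linq.Mapping;

        [Table(Name = "dbo.BoardPost")]
        public class BoardPost
        {
            [Column(IsPrimaryKey = true)]
            public int BoardPostId { get; set; }

            // UpdateCheck.Never keeps this column out of the concurrency
            // check, so the UPDATE matches on the primary key alone.
            [Column(UpdateCheck = UpdateCheck.Never, CanBeNull = true)]
            public int? ListId { get; set; }

            [Column(UpdateCheck = UpdateCheck.Never)]
            public DateTime CreatedOn { get; set; }   // illustrative extra column
        }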


  • CodeIgniter: passing a parameter to a SELECT query from a previous query

    - by krike
    I'm creating a little management tool for the browser game Travian. I select all the villages from the database and I want to display some content that's unique to each of the villages. But in order to query for those unique details I need to pass the id of the village. How should I do this? This is my code (controller):

        function members_area()
        {
            global $site_title;
            $this->load->model('membership_model');
            if($this->membership_model->get_villages())
            {
                $data['rows'] = $this->membership_model->get_villages();
                $id = 1; //this should be dynamic, but how?
                if($this->membership_model->get_tasks($id)):
                    $data['tasks'] = $this->membership_model->get_tasks($id);
                endif;
            }
            $data['title'] = $site_title." | Your account";
            $data['main_content'] = 'account';
            $this->load->view('template', $data);
        }

    And these are the 2 functions I'm using in the model:

        function get_villages()
        {
            $q = $this->db->get('villages');
            if($q->num_rows() > 0)
            {
                foreach ($q->result() as $row)
                {
                    $data[] = $row;
                }
                return $data;
            }
        }

        function get_tasks($id)
        {
            $this->db->select('name');
            $this->db->from('tasks');
            $this->db->where('villageid', $id);
            $q = $this->db->get();
            if($q->num_rows() > 0)
            {
                foreach ($q->result() as $task)
                {
                    $data[] = $task;
                }
                return $data;
            }
        }

    And of course the view:

        <?php foreach($rows as $r) : ?>
        <div class="village">
            <h3><?php echo $r->name; ?></h3>
            <ul>
                <?php foreach($tasks as $task): ?>
                <li><?php echo $task->name; ?></li>
                <?php endforeach; ?>
            </ul>
            <?php echo anchor('site/add_village/'.$r->id.'', '+ add new task'); ?>
        </div>
        <?php endforeach; ?>

    PS: please do not remove the comment in the first block of code!
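    A common shape for this (a sketch only, reusing the model methods from the question) is to fetch each village's tasks in the controller and attach them to the village row, so the view never needs to know an id up front:

        function members_area()
        {
            global $site_title;
            $this->load->model('membership_model');

            $villages = $this->membership_model->get_villages();
            if ($villages)
            {
                foreach ($villages as $village)
                {
                    // Attach each village's tasks to the row object itself;
                    // get_tasks() returns nothing when there are no rows.
                    $tasks = $this->membership_model->get_tasks($village->id);
                    $village->tasks = $tasks ? $tasks : array();
                }
                $data['rows'] = $villages;
            }

            $data['title'] = $site_title." | Your account";
            $data['main_content'] = 'account';
            $this->load->view('template', $data);
        }

    The inner loop in the view then iterates the per-village list instead of a single global $tasks:

        <?php foreach($r->tasks as $task): ?>
        <li><?php echo $task->name; ?></li>
        <?php endforeach; ?>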


  • MySQL Database Query - CodeIgniter

    - by user2450349
    I am building an application with CodeIgniter and need some help with a DB query. I have a table called users with the following fields: user_id, user_name, user_password, user_email, user_role, user_manager_id. In my app, I pull all records from the users table using the following:

        function get_clients()
        {
            $this->db->select('*');
            $this->db->where('user_role', 'client');
            $this->db->order_by("user_name", "Asc");
            $query = $this->db->get("users");
            return $query->result_array();
        }

    This works as expected; however, when I display the results in the view, I also want to display a new column called Manager showing the manager's user_name. user_manager_id is the id of another user in the same table. I'm guessing you can create an outer join on the same table, but I'm not sure. In the view, I display the returned info as follows:

        <table class="table table-striped" id="zero-configuration">
            <thead>
                <tr>
                    <th>Name</th>
                    <th>Email</th>
                    <th>Manager</th>
                </tr>
            </thead>
            <tbody>
                <?php foreach($clients as $row) { ?>
                <tr>
                    <td><?php echo $row['user_name']; ?> (<?php echo $row['user_username']; ?>)</td>
                    <td><?php echo $row['user_email']; ?></td>
                    <td><?php echo $row['???']; ?></td>
                </tr>
                <?php } ?>
            </tbody>
        </table>

    Any idea how I can form the query and display the manager name in the view? Example data:

        user_id  user_name  user_password  user_email       user_role  user_manager_id
        1        Ollie      adjjk34jcd     [email protected]  client     null
        2        James      djklsdfsdjk    [email protected]  client     1

    When I query the database, I want to display results like this:

        Ollie  [email protected]
        James  [email protected]  Ollie
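    One way to pull the manager's name in the same query is to LEFT JOIN the users table to itself. A sketch using CodeIgniter's query builder; the manager_name alias is illustrative:

        // u is the client row, m is the row for that client's manager (if any).
        // Equivalent SQL:
        //   SELECT u.*, m.user_name AS manager_name
        //   FROM users u LEFT JOIN users m ON m.user_id = u.user_manager_id
        //   WHERE u.user_role = 'client' ORDER BY u.user_name ASC
        function get_clients()
        {
            $this->db->select('u.*, m.user_name AS manager_name');
            $this->db->from('users u');
            $this->db->join('users m', 'm.user_id = u.user_manager_id', 'left');
            $this->db->where('u.user_role', 'client');
            $this->db->order_by('u.user_name', 'ASC');
            return $this->db->get()->result_array();
        }

    In the view, the Manager cell then becomes simply:

        <td><?php echo $row['manager_name']; ?></td>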


  • Delete duplicate rows, do not preserve one row

    - by Radley
    I need a query that goes through each entry in a database table, checks whether a single value is duplicated elsewhere in the table, and if it is, deletes both entries (or all of them, if there are more than two). The problem is that the entries are URLs, up to 255 characters, with no other way of identifying the row. Some existing answers on Stack Overflow do not work for me due to performance limitations, or they use uniqueid, which obviously won't work when dealing with a string.

    Long version: I have two tables containing URLs (and only URLs). One table has around 3,000 URLs and the other around 1,000. However, a large majority of the 1,000 URLs were taken from the 3,000-URL table. I need to merge the 1,000 into the 3,000 as new entries only. For this, I made a third table with the combined URLs from both tables, about 4,000 entries. I need to find all duplicate entries in this table and delete them (both of them, without leaving either). I have followed the queries from a few examples on this site, but whenever I try to delete both entries it ends up deleting all the entries, or giving SQL errors.

    Alternatively: I have two tables, each containing one of the separate lists. I need to check each row from one against the other to find the ones that aren't duplicates, and then add those to a third table.

    Edit: I've got my own PHP solution, which is pretty hacky but works. I cannot answer my own question for 8 hours because I'm new, so here it is for now. I went with a PHP script to accomplish this, as I'm more familiar with PHP than MySQL. It generates a simple list of URLs that exist only in the target database, not in both. If you have more than 7,000 entries to parse this may take a while, and you will need to copy/paste the results into a text file or expand the script to store them back into a database. I'm just doing it manually to save time. Note: uses MeekroDB.

        <?php
        require('meekrodb.2.1.class.php');
        DB::$user = 'root';
        DB::$password = '';
        DB::$dbName = 'testdb';

        $all = DB::query('SELECT * FROM old_urls LIMIT 7000');
        foreach($all as $row) {
            $test = DB::query('SELECT url FROM new_urls WHERE url=%s', $row['url']);
            if (!is_array($test)) {
                echo $row['url'] . "\n";
            } else {
                if (count($test) == 0) {
                    echo $row['url'] . "\n";
                }
            }
        }
        ?>
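    For the pure-SQL route, a sketch of a MySQL statement that removes every row whose URL occurs more than once, keeping none of them; the combined table name combined_urls is an assumption:

        -- The extra derived table (dup) works around MySQL's restriction on
        -- selecting from the same table that is being deleted from.
        DELETE FROM combined_urls
        WHERE url IN (
            SELECT url FROM (
                SELECT url
                FROM combined_urls
                GROUP BY url
                HAVING COUNT(*) > 1
            ) AS dup
        );

    Run against the merged 4,000-row table, this should leave exactly the entries that appeared in only one of the two source lists.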


  • dns queries not using nscd for caching

    - by xenoterracide
    I'm trying to use nscd (Name Service Cache Daemon) to cache DNS locally so I can stop using bind to do it. I've gotten it started, and ntpd seems to attempt to use it, but everything else for hosts seems to ignore it. E.g. if I dig apache.org 3 times, none of the lookups hit the cache. I'm viewing the cache stats using nscd -g to determine whether it's been used. I've also turned the debug log level up to see if I can see it hitting, and the queries don't even reach nscd.

    nsswitch.conf:

        # Begin /etc/nsswitch.conf
        passwd: files
        group: files
        shadow: files
        publickey: files
        hosts: cache files dns
        networks: files
        protocols: files
        services: files
        ethers: files
        rpc: files
        netgroup: files
        # End /etc/nsswitch.conf

    nscd.conf:

        #
        # /etc/nscd.conf
        #
        # An example Name Service Cache config file. This file is needed by nscd.
        #
        # Legal entries are:
        #
        #   logfile                 <file>
        #   debug-level             <level>
        #   threads                 <initial #threads to use>
        #   max-threads             <maximum #threads to use>
        #   server-user             <user to run server as instead of root>
        #     server-user is ignored if nscd is started with -S parameters
        #   stat-user               <user who is allowed to request statistics>
        #   reload-count            unlimited|<number>
        #   paranoia                <yes|no>
        #   restart-interval        <time in seconds>
        #
        #   enable-cache            <service> <yes|no>
        #   positive-time-to-live   <service> <time in seconds>
        #   negative-time-to-live   <service> <time in seconds>
        #   suggested-size          <service> <prime number>
        #   check-files             <service> <yes|no>
        #   persistent              <service> <yes|no>
        #   shared                  <service> <yes|no>
        #   max-db-size             <service> <number bytes>
        #   auto-propagate          <service> <yes|no>
        #
        # Currently supported cache names (services): passwd, group, hosts, services
        #

        logfile             /var/log/nscd.log
        threads             4
        max-threads         32
        server-user         nobody
        # stat-user         somebody
        debug-level         9
        # reload-count      5
        paranoia            no
        # restart-interval  3600

        enable-cache            passwd      yes
        positive-time-to-live   passwd      600
        negative-time-to-live   passwd      20
        suggested-size          passwd      211
        check-files             passwd      yes
        persistent              passwd      yes
        shared                  passwd      yes
        max-db-size             passwd      33554432
        auto-propagate          passwd      yes

        enable-cache            group       yes
        positive-time-to-live   group       3600
        negative-time-to-live   group       60
        suggested-size          group       211
        check-files             group       yes
        persistent              group       yes
        shared                  group       yes
        max-db-size             group       33554432
        auto-propagate          group       yes

        enable-cache            hosts       yes
        positive-time-to-live   hosts       3600
        negative-time-to-live   hosts       20
        suggested-size          hosts       211
        check-files             hosts       yes
        persistent              hosts       yes
        shared                  hosts       yes
        max-db-size             hosts       33554432

        enable-cache            services    yes
        positive-time-to-live   services    28800
        negative-time-to-live   services    20
        suggested-size          services    211
        check-files             services    yes
        persistent              services    yes
        shared                  services    yes
        max-db-size             services    33554432

    resolv.conf:

        # Generated by dhcpcd from eth0
        nameserver 127.0.0.1
        domain westell.com
        nameserver 192.168.1.1
        nameserver 208.67.222.222
        nameserver 208.67.220.220

    As kind of a side note, I'm using Arch Linux.
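    Worth noting when testing: dig builds and sends DNS packets itself, straight to the servers in resolv.conf, so it never consults nsswitch or nscd at all. Only lookups that go through the C library resolver (getaddrinfo/gethostbyname) can hit the cache. A quick way to exercise it:

        # getent goes through the C library's NSS stack, unlike dig.
        getent hosts apache.org              # first call: cache miss, populates the cache
        getent hosts apache.org              # second call: should be a cache hit
        nscd -g | grep -A4 'hosts cache'     # inspect the hit/miss counters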


  • conflict in debian packages

    - by Alaa Alomari
    I have a Debian 4 server (I know it is very old):

        cat /etc/issue
        Debian GNU/Linux 4.0 \n \l

    I have the following in /etc/apt/sources.list:

        deb http://debian.uchicago.edu/debian/ stable main
        deb http://ftp.debian.org/debian/ stable main
        deb-src http://ftp.debian.org/debian/ stable main
        deb http://security.debian.org/ stable/updates main

    apt-get upgrade gives:

        Reading package lists... Done
        Building dependency tree... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies.
          libt1-5: Depends: libc6 (>= 2.7) but 2.3.6.ds1-13etch10+b1 is installed
          locales: Depends: glibc-2.11-1 but it is not installable
        E: Unmet dependencies. Try using -f.

    Now it shows that I have Debian 6!!

        cat /etc/issue
        Debian GNU/Linux 6.0 \n \l

    EDIT: I have tried:

        apt-get update
        Get: 1 http://debian.uchicago.edu stable Release.gpg [1672B]
        Hit http://debian.uchicago.edu stable Release
        Ign http://debian.uchicago.edu stable/main Packages/DiffIndex
        Hit http://debian.uchicago.edu stable/main Packages
        Get: 2 http://security.debian.org stable/updates Release.gpg [836B]
        Hit http://security.debian.org stable/updates Release
        Get: 3 http://ftp.debian.org stable Release.gpg [1672B]
        Ign http://security.debian.org stable/updates/main Packages/DiffIndex
        Hit http://security.debian.org stable/updates/main Packages
        Hit http://ftp.debian.org stable Release
        Ign http://ftp.debian.org stable/main Packages/DiffIndex
        Ign http://ftp.debian.org stable/main Sources/DiffIndex
        Hit http://ftp.debian.org stable/main Packages
        Hit http://ftp.debian.org stable/main Sources
        Fetched 3B in 0s (3B/s)
        Reading package lists... Done

        apt-get dist-upgrade
        Reading package lists... Done
        Building dependency tree... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies.
          libt1-5: Depends: libc6 (>= 2.7) but 2.3.6.ds1-13etch10+b1 is installed
          locales: Depends: glibc-2.11-1
        E: Unmet dependencies. Try using -f.

        apt-get -f install
        Reading package lists... Done
        Building dependency tree... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          gcc-4.4-base libbsd-dev libbsd0 libc-bin libc-dev-bin libc6
        Suggested packages:
          glibc-doc
        Recommended packages:
          libc6-i686
        The following packages will be REMOVED
          libc6-dev libedit-dev libexpat1-dev libgcrypt11-dev libjpeg62-dev
          libmcal0-dev libmhash-dev libncurses5-dev libpam0g-dev libsablot0-dev
          libtool libttf-dev
        The following NEW packages will be installed
          gcc-4.4-base libbsd-dev libbsd0 libc-bin libc-dev-bin
        The following packages will be upgraded:
          libc6
        1 upgraded, 5 newly installed, 12 to remove and 349 not upgraded.
        7 not fully installed or removed.
        Need to get 0B/5050kB of archives.
        After unpacking 23.1MB disk space will be freed.
        Do you want to continue [Y/n]? y
        Preconfiguring packages ...
        dpkg: regarding .../libc-bin_2.11.3-2_i386.deb containing libc-bin:
         package uses Breaks; not supported in this dpkg
        dpkg: error processing /var/cache/apt/archives/libc-bin_2.11.3-2_i386.deb (--unpack):
         unsupported dependency problem - not installing libc-bin
        Errors were encountered while processing:
         /var/cache/apt/archives/libc-bin_2.11.3-2_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Now it seems there is a conflict!! How can I fix it? And is it true that the server has become Debian 6!!?? Thanks for your help
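    The underlying issue is visible in sources.list: it points at the moving alias "stable", which by now resolves to a much newer release than the installed etch (4.0), so apt has started a partial upgrade. A sketch of a sources.list pinned to the installed release by codename; etch has since been moved to the archive mirrors, and these paths follow the standard archive layout (worth verifying before relying on them):

        # Pin the release by codename instead of the moving 'stable' alias.
        deb http://archive.debian.org/debian/ etch main
        deb-src http://archive.debian.org/debian/ etch main
        deb http://archive.debian.org/debian-security/ etch/updates main

    The server has not actually become Debian 6; /etc/issue reports 6.0 only because the base-files package was pulled in from the newer release during the partial upgrade.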



  • Trying to connect phpMyAdmin to a remote MySQL server (2002: can't connect)

    - by Malcolm Jones
    Trying to get phpMyAdmin to talk to a remote MySQL server. The config is below, and there is already a user set up in the MySQL DB that is allowed to log in from the host PMA sits on. Hosting is provided by Rackspace (RightScale), and both cloud servers are behind the same firewall.

    config.inc.php:

        <?php
        $cfg['blowfish_secret'] = '';

        $i = 0;
        $i++;
        $cfg['Servers'][$i]['host']          = 'XX.XX.XX.XX';  // MySQL hostname or IP address
        $cfg['Servers'][$i]['port']          = '';             // MySQL port - leave blank for default port
        $cfg['Servers'][$i]['socket']        = '';             // Path to the socket - leave blank for default socket
        $cfg['Servers'][$i]['connect_type']  = 'tcp';          // How to connect to MySQL server ('tcp' or 'socket')
        $cfg['Servers'][$i]['extension']     = 'mysql';        // The php MySQL extension to use ('mysql' or 'mysqli')
        $cfg['Servers'][$i]['compress']      = FALSE;          // Use compressed protocol for the MySQL connection (requires PHP >= 4.3.0)
        $cfg['Servers'][$i]['controluser']   = '';             // MySQL control user settings (this user must have read-only
        $cfg['Servers'][$i]['controlpass']   = '';             //   access to the "mysql/user" and "mysql/db" tables).
                                                               //   The controluser is also used for all relational features (pmadb)
        $cfg['Servers'][$i]['auth_type']     = 'config';       // Authentication method (config, http or cookie based)?
        $cfg['Servers'][$i]['user']          = 'USERNAME';     // MySQL user
        $cfg['Servers'][$i]['password']      = 'PASSWORD';     // MySQL password (only needed with 'config' auth_type)
        $cfg['Servers'][$i]['only_db']       = '';             // If set to a db-name, only this db is displayed in the left frame;
                                                               //   may also be an array of db-names, where sorting order is relevant
        $cfg['Servers'][$i]['hide_db']       = '';             // Database name to be hidden from listings
        $cfg['Servers'][$i]['verbose']       = '';             // Verbose name for this host - leave blank to show the hostname
        $cfg['Servers'][$i]['pmadb']         = '';             // Database used for Relation, Bookmark and PDF features
                                                               //   (see scripts/create_tables.sql) - leave blank for no support
        $cfg['Servers'][$i]['bookmarktable'] = '';             // Bookmark table - leave blank for no bookmark support
        $cfg['Servers'][$i]['relation']      = '';             // Table to describe the relation between links (see doc)
        $cfg['Servers'][$i]['table_info']    = '';             // Table to describe the display fields
        $cfg['Servers'][$i]['table_coords']  = '';             // Table to describe the tables position for the PDF schema
        $cfg['Servers'][$i]['pdf_pages']     = '';             // Table to describe pages of the relation pdf
        $cfg['Servers'][$i]['column_info']   = '';             // Table to store column information (comments/mime types)
        $cfg['Servers'][$i]['history']       = '';             // Table to store SQL query history
        $cfg['Servers'][$i]['verbose_check'] = TRUE;           // Set to FALSE if you know your pma_* tables are up to date;
                                                               //   this prevents compatibility checks and increases performance
        $cfg['Servers'][$i]['AllowRoot']     = TRUE;           // Whether to allow root login
        $cfg['Servers'][$i]['AllowDeny']['order'] = '';        // Host authentication order, leave blank to not use
        $cfg['Servers'][$i]['AllowDeny']['rules'] = array();   // Host authentication rules, leave blank for defaults

    Please let me know if you need any more info. -- Malcolm
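    Since error 2002 is a network-level failure (the TCP connection never gets as far as authentication), a few checks from the phpMyAdmin host may help; the commands below are a sketch:

        mysql -h XX.XX.XX.XX -u USERNAME -p   # can the CLI client reach the server at all?
        telnet XX.XX.XX.XX 3306               # is port 3306 open through the firewall?

        # On the MySQL server, make sure it listens on the network interface
        # rather than only on localhost: in my.cnf, adjust or comment out
        #   bind-address = 127.0.0.1
        # and remove skip-networking if present, then restart mysqld.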


  • Fake ISAPI handler to serve static files whose extensions are rewritten by a URL rewriter

    - by developerit
    Introduction

    I often map the html extension to the asp.net DLL in order to use a URL rewriter with .html extensions. Recently, in the new version of www.nouvelair.ca, we renamed all URLs to end with .html. This worked great, but failed when we used FCK Editor: static HTML files would not get served, because we had mapped the html extension to the .NET Framework. What can we do to use the .html extension with our rewriter but still keep IIS's behavior for static HTML files?

    Analysis

    I thought this could be resolved with a simple HTTP handler. We would map the URLs of static files in our rewriter to this handler, which would read the static file and serve it, just as IIS would do.

    Implementation

    This is how I coded the class. Note that this may not be bullet proof; I only tested it once, and I am sure that the logic behind IIS is more complicated than this. If you find errors or think of possible improvements, let me know.

        Imports System.Web
        Imports System.Web.Services

        ' Author: Nicolas Brassard
        ' For: Solutions Nitriques inc. http://www.nitriques.com
        ' Date Created: April 18, 2009
        ' Last Modified: April 18, 2009
        ' License: CPOL (http://www.codeproject.com/info/cpol10.aspx)
        ' Files: ISAPIDotNetHandler.ashx
        '        ISAPIDotNetHandler.ashx.vb
        ' Class: ISAPIDotNetHandler
        ' Description: Fake ISAPI handler to serve static files.
        '   Useful when you want to serve a static file that has a rewritten extension.
        '   Example: one often maps the html extension to the asp.net dll in order to
        '   use a url rewriter with .html. If you still want to serve static html files,
        '   add a rewriter rule to redirect html files to this handler.
        Public Class ISAPIDotNetHandler
            Implements System.Web.IHttpHandler

            Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest
                ' Since we are doing the job IIS normally does with html files,
                ' we set the content type to match html. You may want to customize
                ' this with your own logic if you want to serve txt, xml or any other text file.
                context.Response.ContentType = "text/html"

                ' We begin a try here. Any error that occurs will result in a 404 Page Not Found
                ' error. We replicate the behavior of IIS when it doesn't find the corresponding file.
                Try
                    ' Declare a local variable containing the value of the query string
                    Dim uri As String = context.Request("fileUri")

                    ' If the value in the query string is null,
                    ' throw an error to generate a 404
                    If String.IsNullOrEmpty(uri) Then
                        Throw New ApplicationException("No fileUri")
                    End If

                    ' If the value in the query string doesn't end with .html, block the access.
                    ' This is a HUGE security hole since it could permit full read access
                    ' to .aspx, .config, etc.
                    If Not uri.ToLower.EndsWith(".html") Then
                        ' Throw an error to generate a 404
                        Throw New ApplicationException("Extension not allowed")
                    End If

                    ' Map the file on the server. If the file doesn't exist on the server,
                    ' it will throw an exception and generate a 404.
                    Dim fullPath As String = context.Server.MapPath(uri)

                    ' Read the actual file
                    Dim stream As IO.StreamReader = FileIO.FileSystem.OpenTextFileReader(fullPath)

                    ' Write the file into the response
                    context.Response.Output.Write(stream.ReadToEnd)

                    ' Close and dispose the stream
                    stream.Close()
                    stream.Dispose()
                    stream = Nothing
                Catch ex As Exception
                    ' Set the status code of the response
                    context.Response.StatusCode = 404 'Page not found

                    ' For testing and debugging only! This may cause a security leak
                    ' context.Response.Output.Write(ex.Message)
                Finally
                    ' In all cases, flush and end the response
                    context.Response.Flush()
                    context.Response.End()
                End Try
            End Sub

            ' Automatically generated by Visual Studio
            ReadOnly Property IsReusable() As Boolean Implements IHttpHandler.IsReusable
                Get
                    Return False
                End Get
            End Property
        End Class

    Conclusion

    As you can see, with our static files mapped to this handler via the query string (e.g. /ISAPIDotNetHandler.ashx?fileUri=index.html), you get the same behavior as if you had requested the URI /index.html.

    Finally, test this only in IIS with the html extension mapped to aspnet_isapi.dll. URL rewriting will work in Cassini (the internal web server shipped with Visual Studio), but it's not the same as IIS, since there every request is handled by .NET.

    Versions: First release
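    As a usage sketch, here is one way the .html-to-handler redirection could be wired up without a third-party rewriter, using HttpContext.RewritePath in Global.asax; the path handling is illustrative and assumes the handler sits in the application root:

        ' Global.asax.vb (sketch)
        Sub Application_BeginRequest(ByVal sender As Object, ByVal e As EventArgs)
            ' AppRelativeCurrentExecutionFilePath looks like "~/page.html"
            Dim path As String = Request.AppRelativeCurrentExecutionFilePath
            If path.ToLower().EndsWith(".html") Then
                ' Hand the request to the fake ISAPI handler, keeping the file name
                Context.RewritePath("~/ISAPIDotNetHandler.ashx?fileUri=" & _
                                    Server.UrlEncode(path.Substring(2)))
            End If
        End Sub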


  • OAM OVD integration - error encountered during performance testing: "LDAP response read timed out, timeout used:2000ms"

    - by siddhartha_sinha
    While working on an OAM-OVD integration for one of my clients, I was involved in performance testing of the products, during which I encountered OAM authentication failures when talking to OVD under heavy load. The OAM logs revealed the following:

        oracle.security.am.common.policy.common.response.ResponseException:
        oracle.security.am.engines.common.identity.provider.exceptions.IdentityProviderException:
        OAMSSA-20012: Exception in getting user attributes for user : dummy_user1, idstore MyIdentityStore
        with exception javax.naming.NamingException: LDAP response read timed out, timeout used:2000ms.;
        remaining name 'ou=people,dc=oracle,dc=com'
            at oracle.security.am.common.policy.common.response.IdentityValueProvider.getUserAttribute(IdentityValueProvider.java:271)
            ...

    During the authentication and authorization process, OAM complains that the LDAP repository is taking too long to return user attributes. The default value is 2 seconds, as can be seen in the exception ("2000ms"). While troubleshooting the issue, I found that the LDAP read timeout can be increased in oam-config.xml. For reference, the attribute to add is:

        <Setting Name="LdapReadTimeout" Type="xsd:string">2000</Setting>

    However, increasing the timeout is not recommended unless it is absolutely necessary; first ensure that the back-end directory servers are working fine. Instead, I took the path of tuning OVD, as follows.

    1) Navigate to the ORACLE_INSTANCE/config/OPMN/opmn folder and edit opmn.xml. Search for <data id="java-options" ...> and edit the entry as shown:

        <category id="start-options">
          <data id="java-bin" value="$ORACLE_HOME/jdk/bin/java"/>
          <data id="java-options" value="-server -Xms1024m -Xmx1024m -Dvde.soTimeoutBackend=0
                -Didm.oracle.home=$ORACLE_HOME -Dcommon.components.home=$ORACLE_HOME/../oracle_common
                -XX:+PrintGCDetails -XX:+PrintGCDateStamps
                -Xloggc:/opt/bea/Middleware/asinst_1/diagnostics/logs/OVD/ovd1/ovdGClog.log
                -XX:+UseConcMarkSweepGC
                -Doracle.security.jps.config=$ORACLE_INSTANCE/config/JPS/jps-config-jse.xml"/>
          <data id="java-classpath" value="$ORACLE_HOME/ovd/jlib/vde.jar$:$ORACLE_HOME/jdbc/lib/ojdbc6.jar"/>
        </category>
        </module-data>
        <stop timeout="120"/>
        <ping interval="60"/>
        </process-type>

    When the system is busy, a ping from the Oracle Process Manager and Notification Server (OPMN) to Oracle Virtual Directory may fail. As a result, OPMN will restart Oracle Virtual Directory after 20 seconds (the default ping interval). To avoid this, consider increasing the ping interval to 60 seconds or more, as shown above.

    2) Navigate to the ORACLE_INSTANCE/config/OVD/ovd1 folder. Open the listeners.os_xml file and make the following changes:

        - Search for <ldap id="Ldap Endpoint" ...> and point the cursor to that line.
        - Change the threads count to 200.
        - Change anonymous bind to Deny.
        - Change workQueueCapacity to 8096.
        - Add a new parameter <useNIO> and set its value to false, viz: <useNIO>false</useNIO>

    Snippet:

        <ldap version="8" id="LDAP Endpoint">
          .......
          <socketOptions>
            <backlog>128</backlog>
            <reuseAddress>false</reuseAddress>
            <keepAlive>false</keepAlive>
            <tcpNoDelay>true</tcpNoDelay>
            <readTimeout>0</readTimeout>
          </socketOptions>
          <useNIO>false</useNIO>
        </ldap>

    Restart the OVD server. For more information on OVD tuning, refer to http://docs.oracle.com/cd/E25054_01/core.1111/e10108/ovd.htm.

    Please note: a few patches have been released on the OAM side for performance tuning as well. I will provide the updates shortly!


  • JMX Based Monitoring - Part Three - Web App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I showed a technique for integrating a JMX console with Oracle WebLogic, which is a standard feature of Oracle WebLogic 11g. Customers on other web application servers, and on other versions of Oracle WebLogic, can refer to the documentation provided with the server to do a similar thing. In this blog entry I am going to discuss a feature, present only in Oracle Utilities Application Framework 4 and above, that allows JMX to be used for managing and monitoring the Oracle Utilities web applications. In this case JMX can be used to perform monitoring as well as to manage the cache.

    In Oracle Utilities Application Framework, you can enable the framework's own Web Application Server JMX monitoring by specifying a JMX port number in the RMI Port number for JMX Web setting, and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available via the configureEnv[.sh] -a utility. Once this information is supplied, a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility:

    spl.properties - contains the JMX URL, the security configuration and the MBeans that are enabled. For example, on my demonstration machine:

        spl.runtime.management.rmi.port=6740
        spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6740/oracle/ouaf/webAppConnector
        jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
        jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
        ouaf.jmx.com.splwg.base.support.management.mbean.JVMInfo=enabled
        ouaf.jmx.com.splwg.base.web.mbeans.FlushBean=enabled

    ouaf.jmx.* files - contain the userid and password. The default setup uses the JMX default security configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options, see the JMX site.

    Once the facility has been configured and the changes reflected in the product using the initialSetup[.sh] utility, it can be used. For illustrative purposes, I will use jconsole, but any JSR160-compliant browser or client can be used (with the appropriate configuration). Once you start jconsole (ensure that splenviron[.sh] is executed beforehand to set the environment variables or, for a remote connection, ensure java is in your path and jconsole.jar in your classpath), you specify the URL from the spl.runtime.management.connector.url.default entry and the credentials you specified in the jmx.remote.x.* files. Remember these are encrypted by default, so if you view the file you may not be able to decipher it visually.

    There are three MBeans available to you:

    flushBean - a JMX replacement for the jsp versions of the flush utilities provided in previous releases of the Oracle Utilities Application Framework. You can manage the cache using the provided operations from JMX. The jsp versions of the flush utilities are still provided for backward compatibility, but are now authorization controlled.

    JVMInfo - a JMX replacement for the jsp version of the JVMInfo screen used by support to get a handle on JVM information. This information is environmental rather than operational and is used for support purposes. The jsp version of JVMInfo is still provided for backward compatibility, but is now also authorization controlled.

    JVMSystem - an implementation of the standard Java system MXBeans for use in monitoring. We provide our own implementation of the base MBeans to save on creating another JMX configuration for internal monitoring, and to provide a consistent interface across platforms for the MXBeans. This MBean is disabled by default and can be enabled using the enableJVMSystemBeans operation. It allows monitoring of the ClassLoading, Memory, OperatingSystem, Runtime and Thread MXBeans. Refer to the Server Administration Guides provided with your product and the Technical Best Practices whitepaper for information about individual statistics.

    Web Application Server JMX monitoring allows greater visibility for monitoring and managing the Oracle Utilities Application Framework application from jconsole or any JSR160-compliant JMX browser or JMX console.
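    For readers who want to script against this rather than use jconsole, a minimal JSR-160 client might look like the sketch below. The service URL matches the spl.properties example above, while the credentials and the connect-from-code approach are illustrative assumptions:

        import java.util.HashMap;
        import java.util.Map;
        import javax.management.MBeanServerConnection;
        import javax.management.ObjectName;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXConnectorFactory;
        import javax.management.remote.JMXServiceURL;

        public class OuafJmxPing {
            public static void main(String[] args) throws Exception {
                JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:6740/oracle/ouaf/webAppConnector");

                // Standard RMI-connector credential convention: {user, password},
                // matching the entries in the ouaf.jmx.* files.
                Map<String, Object> env = new HashMap<>();
                env.put(JMXConnector.CREDENTIALS, new String[] { "jmxUser", "jmxPassword" });

                try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
                    MBeanServerConnection conn = connector.getMBeanServerConnection();
                    // List every registered MBean; the flush and JVMInfo beans show up here.
                    for (ObjectName name : conn.queryNames(null, null)) {
                        System.out.println(name);
                    }
                }
            }
        }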


  • Google Chrome OS

    - by Piet
    It's about time someone took this initiative: Google Chrome OS. I especially like the following:

        Speed, simplicity and security are the key aspects of Google Chrome OS. We're designing the OS to be fast and lightweight, to start up and get you onto the web in a few seconds. The user interface is minimal to stay out of your way, and most of the user experience takes place on the web. And as we did for the Google Chrome browser, we are going back to the basics and completely redesigning the underlying security architecture of the OS so that users don't have to deal with viruses, malware and security updates. It should just work.

    I recently had the 'pleasure' of witnessing several 60+ year old friends and family members (all respect for everyone in their 2nd or 3rd youth) buying their first PC and taking their first steps using a PC and the net. Have you ever seen the gazillions of little 'useful' tools that come installed on a new standard Vista PC or laptop? This is like learning to drive a new car and being placed in an airplane cockpit. And then there are all the messages one gets about virus/security checks, fingerprints not being enrolled, trial periods expiring (because half of those really useful tools come with a trial period), ... If I were in their shoes, confronted with all this as a total newbie, I guess I'd just give up pretty soon. As a matter of fact, I actually gave up on Vista on my work laptop; it was driving me crazy. Thank god I was allowed to install XP. I'm a Linux user at home, and Vista was such a frustrating experience that Windows XP actually felt like a breath of fresh air.

    And what are those people using? Email, browsing... and maybe writing a little letter now and then, or storing their photos if they have a digital camera. Actually (side note), I get the impression that hearing about Facebook is a major motivator for digital newbies to finally take the plunge, buy a PC and get online.

    And OK, we've seen initiatives like this before, but Google is a brand everyone knows, unlike Ubuntu, Debian or Mandriva. Google = God. If I were Microsoft, I'd be wetting my pants knowing Google was about to release its own OS, without a doubt fully optimized to use its own online office suite. On the other hand, the old adage 'no one ever got fired for choosing Microsoft' still holds a lot of truth.

    I hope I'll be able to give it a big thumbs up if a would-be PC user asks me what kind of PC/OS they should go for in the near future. On the other hand, if I do that, I'm pretty sure a couple of weeks later I'll get a call asking how to install this game or photo-editing tool they got from one of their Windows-using friends... or that nifty photo printer they just bought. But then, I also get those questions now from newbie Windows users. It takes a couple of years before newbie PC users understand that some things just don't work and aren't worth the time spent trying to fix them. I just wish they'd go back to the shop when something doesn't work. You also don't let your mechanic friend try to fix a problem with your brand new car. But that's another story... Wait and see...


  • [MISC GEEKERY] Support for Some Versions of Windows is Ending

    - by Matthew Guay
    Are you sticking with your older version of Windows instead of upgrading to Windows 7? There's no problem with that, but here's a quick reminder to make sure you're running the latest service pack to stay protected.

    Microsoft offers security updates and more throughout the lifetime of a version of Windows, and periodically rolls all the latest updates and improvements together into a service pack. After a while, only computers running the latest service pack still get updates to keep them safe. Recently, Microsoft has been warning that support is ending for Windows XP with Service Pack 2 and for the release version of Windows Vista. When support ends, you will not receive any new security updates for Windows. You can continue to use your computer the same as before, but it may not be as secure, and if new security issues are discovered, they will not be fixed. However, it's easy to stay supported: simply install XP Service Pack 3 or Vista Service Pack 2, depending on your computer. Here's how to do that:

    Windows XP

    To install Windows XP Service Pack 3, you can either check Windows Update for updates, or simply download it from Microsoft at this link: Download XP Service Pack 3. Run the download (or, if you're updating from Windows Update, the installer will launch automatically) and proceed just as you normally would when installing a program. Your computer will have to reboot during the install, so make sure you've saved all your work and closed other programs before installing.

    To check which service pack your computer is running, click Start, then right-click on the My Computer button and choose Properties. This shows the version and service pack of Windows you are running; in this screenshot, the computer has been updated to Service Pack 3.

    Please note: the version of XP shipped with Windows XP Mode in Windows 7 comes preconfigured with Service Pack 3 and does not need updating. Additionally, if your computer is running the 64-bit version of Windows XP, then Service Pack 2 is the latest service pack for your computer, and it is still supported.

    Windows Vista

    If your computer is running Windows Vista, you can install Service Pack 2 to stay up to date and supported. Simply check Windows Update for Service Pack 2 if you haven't installed it yet, or download the installer for your computer from the link below:

        32 bit: Vista Service Pack 2 32-bit
        64 bit: Vista Service Pack 2 64-bit

    Run the installer and set it up as a normal program installation. Note that your computer will reboot during the installation, so make sure to save your work and close other programs before installing. To see which service pack your computer is running, click the Start orb, then right-click on the Computer button and select Properties. This shows the service pack and edition of Windows Vista your computer is running, right at the top of the page.

    Conclusion

    Microsoft makes it easy to keep using your computer safely and securely even if you choose to keep using your older version of Windows. By installing the latest service pack, you will make sure that your computer is supported for years to come. Windows 7 users, you don't need to worry; no service pack has been released for it yet. Stay tuned, and we'll let you know when any new service packs are available.

    www.microsoft.com/EOS - End of Support Information from Microsoft


  • Today's Links (6/29/2011)

    - by Bob Rhubart
    Event-Driven SOA: Events meet Services | Guido Schmutz
    Oracle ACE Director Guido Schmutz shows you how to achieve extreme loose coupling within a Service-Oriented Architecture by using event-driven interactions.

    Misconceptions About Software Architecture | Sanjeev Kumar
    A concise, to-the-point, and informative article by Sanjeev Kumar.

    Good Leaders Acknowledge What Can't Be Done - Jeffrey Pfeffer - Harvard Business Review
    "None of us likes to admit to bad decisions," says Jeffrey Pfeffer. "But imagine how much harder that is for someone who has been chosen to lead a large organization precisely because he or she is thought to have the power to see the future more clearly and chart a wise course."

    Suboptimal Thinking within Enterprise Architecture | James McGovern
    McGovern says: "We need to remember that enterprises live and thrive beyond just the current person at the helm."

    Boundaryless Information Flow | Richard Veryard
    "If all the boundaries are removed or porous, then the (extended) enterprise or ecosystem becomes like a giant sponge, in which all information permeates the whole," Veryard says. "Some people may think that's a good idea, but it's not what I'd call loose coupling."

    Coming to a City Near You: Oracle Business Analytics Summits | Rob Reynolds
    This series of events includes a Technology and Architecture track.

    New Date for Implementation of Sun Hands-On Course Requirement (Oracle Certification)
    As announced on the Oracle Certification website, the Java Architect, Java Developer, Solaris System Administrator and Solaris Security Administrator certification tracks will include a new mandatory course attendance requirement.

    VirtualBox 4.0.10 is now available for download | Bob Netherton
    Netherton shares information on the new release.

    Updated Technical Best Practices whitepaper | Anthony Shorten
    The Technical Best Practices whitepaper has been updated with the latest advice. "New advice includes new installation advice, advanced settings, new security settings and advice for both Oracle WebLogic and IBM WebSphere installations," says Shorten.

    Kscope 11 ADF, AIA and Business Rules | Peter Paul van de Beek
    Whitehorses Solution Architect Peter Paul van de Beek shares his impressions of KScope11 presentations by Markus Eisele, Sten Vesterli, and Edwin Biemond.

    Amazon AWS for the learning experience | Andrej Koelewijn
    "Using AWS changes your expectations of how your internal data center should operate," says Koelewijn.

    BPMN is dead, long live BPEL! (SOA Partner Community Blog)
    Jürgen Kress shares information - including a long list of speakers - for the SOA & BPM Integration Days 2011 conference, October 12th & 13th 2011 in Düsseldorf.

    InfoQ: HTML5 and the Dawn of Rich Mobile Web Applications
    James Pearce introduces cross-platform web app development using HTML5 and web frameworks, such as jQTouch, jQuery Mobile, Sencha Touch, and PhoneGap, outlining what makes a good framework.

    InfoQ: Interview and Book Excerpt: CMMI for Development
    "Frameworks like TOGAF are used to define an architecture that aligns IT assets and resources to support key business needs and processes of key stakeholders," says SEI's Mike Konrad. "But the individual application systems, capabilities, services, networks, and other IT assets and infrastructure still need to be acquired, developed, or sustained."

    InfoQ: Architecting a Cloud-Scale Identity Fabric | Eric Olden
    "The most cited reason for not moving to the cloud is concern about security," says Olden. "In particular, managing user identity and access in the cloud is a tough problem to solve and a big security concern for organizations."


  • Juju stuck in "pending" state when using LXC

    - by Andre
    So I'm trying to get started with Juju, and tried to do this locally using LXC. I followed the instructions here: How do I configure juju for local usage? Unfortunately this doesn't seem to work for me. status shows the following:

        $ juju status
        machines:
          0:
            agent-state: running
            dns-name: localhost
            instance-id: local
            instance-state: running
        services:
          mysql:
            charm: cs:precise/mysql-1
            relations:
              db:
              - wordpress
            units:
              mysql/0:
                agent-state: pending
                machine: 0
                public-address: null
          wordpress:
            charm: cs:precise/wordpress-0
            exposed: true
            relations:
              db:
              - mysql
            units:
              wordpress/0:
                agent-state: pending
                machine: 0
                open-ports: []
                public-address: null
        2012-05-10 14:09:38,155 INFO 'status' command finished successfully

    As you can see, the agent-state is 'pending' and there is no public address where I can access the newly created site. Am I missing something here?

    UPDATE: I tried destroying the environment and doing everything again (multiple times). This is the output of juju debug-log:

        ~$ juju debug-log
        2012-05-11 08:50:23,790 INFO Enabling distributed debug log.
        2012-05-11 08:50:23,806 INFO Tailing logs - Ctrl-C to stop.
        2012-05-11 08:50:42,338 Machine:0: juju.agents.machine DEBUG: Units changed old:set([]) new:set(['mysql/0'])
        2012-05-11 08:50:42,339 Machine:0: juju.agents.machine DEBUG: Starting service unit: mysql/0 ...
        2012-05-11 08:50:42,459 Machine:0: unit.deploy DEBUG: Downloading charm cs:precise/mysql-1 to /home/andre/.juju/data/andre-local/charms
        2012-05-11 08:50:42,620 Machine:0: unit.deploy DEBUG: Using <juju.machine.unit.UnitContainerDeployment object at 0x9c54b6c> for mysql/0 in /home/andre/.juju/data/andre-local
        2012-05-11 08:50:42,648 Machine:0: unit.deploy DEBUG: Starting service unit mysql/0...
        2012-05-11 08:50:42,649 Machine:0: unit.deploy DEBUG: Creating master container...
        2012-05-11 08:54:33,992 Machine:0: unit.deploy DEBUG: Created master container andre-local-0-template
        2012-05-11 08:54:33,993 Machine:0: unit.deploy INFO: Creating container mysql-0...
        2012-05-11 08:56:18,760 Machine:0: unit.deploy INFO: Container created for mysql/0
        2012-05-11 08:56:19,466 Machine:0: unit.deploy DEBUG: Charm extracted into container
        2012-05-11 08:56:19,569 Machine:0: unit.deploy DEBUG: Starting container...
        2012-05-11 08:56:22,707 Machine:0: unit.deploy INFO: Started container for mysql/0
        2012-05-11 08:56:22,707 Machine:0: unit.deploy INFO: Started service unit mysql/0
        2012-05-11 08:56:23,012 Machine:0: juju.agents.machine DEBUG: Units changed old:set(['mysql/0']) new:set(['wordpress/0', 'mysql/0'])
        2012-05-11 08:56:23,039 Machine:0: juju.agents.machine DEBUG: Starting service unit: wordpress/0 ...
        2012-05-11 08:56:23,154 Machine:0: unit.deploy DEBUG: Downloading charm cs:precise/wordpress-0 to /home/andre/.juju/data/andre-local/charms
        2012-05-11 08:56:23,396 Machine:0: unit.deploy DEBUG: Using <juju.machine.unit.UnitContainerDeployment object at 0x9c519cc> for wordpress/0 in /home/andre/.juju/data/andre-local
        2012-05-11 08:56:23,620 Machine:0: unit.deploy DEBUG: Starting service unit wordpress/0...
        2012-05-11 08:56:23,621 Machine:0: unit.deploy INFO: Creating container wordpress-0...
        2012-05-11 08:58:24,739 Machine:0: unit.deploy INFO: Container created for wordpress/0
        2012-05-11 08:58:25,163 Machine:0: unit.deploy DEBUG: Charm extracted into container
        2012-05-11 08:58:25,397 Machine:0: unit.deploy DEBUG: Starting container...
        2012-05-11 08:58:27,982 Machine:0: unit.deploy INFO: Started container for wordpress/0
        2012-05-11 08:58:27,983 Machine:0: unit.deploy INFO: Started service unit wordpress/0

    And this is the result of the status command with the verbose flag:

        ~$ juju -v status
        2012-05-11 08:51:53,464 DEBUG Initializing juju status runtime
        2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.5
        2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@662: Client environment:host.name=andre-ufo
        2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@669: Client environment:os.name=Linux
        2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@670: Client environment:os.arch=3.2.0-24-generic-pae
        2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@671: Client environment:os.version=#37-Ubuntu SMP Wed Apr 25 10:47:59 UTC 2012
        2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@679: Client environment:user.name=andre
        2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@687: Client environment:user.home=/home/andre
        2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@699: Client environment:user.dir=/home/andre
        2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@zookeeper_init@727: Initiating client connection, host=192.168.122.1:41779 sessionTimeout=10000 watcher=0xb7780620 sessionId=0 sessionPasswd=<null> context=0x9242ee8 flags=0
        2012-05-11 08:51:53,627:4030(0xb6b90b40):ZOO_INFO@check_events@1585: initiated connection to server [192.168.122.1:41779]
        2012-05-11 08:51:53,649:4030(0xb6b90b40):ZOO_INFO@check_events@1632: session establishment complete on server [192.168.122.1:41779], sessionId=0x1373ae057d90007, negotiated timeout=10000
        2012-05-11 08:51:53,651 DEBUG Environment is initialized.
        machines:
          0:
            agent-state: running
            dns-name: localhost
            instance-id: local
            instance-state: running
        services:
          mysql:
            charm: cs:precise/mysql-1
            relations:
              db:
              - wordpress
            units:
              mysql/0:
                agent-state: pending
                machine: 0
                public-address: null
          wordpress:
            charm: cs:precise/wordpress-0
            relations:
              db:
              - mysql
            units:
              wordpress/0:
                agent-state: pending
                machine: 0
                public-address: null

    Read the article

  • Q&A: Oracle's Paul Needham on How to Defend Against Insider Attacks

    - by Troy Kitch
    Source: Database Insider Newsletter. The threat from insider attacks continues to grow. In fact, just since January 1, 2014, insider breaches have been reported by a major consumer bank, a major healthcare organization, and a range of state and local agencies, according to the Privacy Rights Clearinghouse. We asked Paul Needham, Oracle senior director, product management, to shed light on the nature of these pernicious risks—and how organizations can best defend themselves against them.

    Q. First, can you please define the term "insider" in this context?

    A. According to the CERT Insider Threat Center, a malicious insider is a current or former employee, contractor, or business partner who "has or had authorized access to an organization's network, system, or data and intentionally exceeded or misused that access in a manner that negatively affected the confidentiality, integrity, or availability of the organization's information or information systems."

    Q. What has changed with regard to insider risks?

    A. We are actually seeing the risk of privileged insiders growing. In the latest Independent Oracle Users Group Data Security Survey, the number of organizations that had not taken steps to prevent privileged user access to sensitive information grew from 37 percent to 42 percent. Additionally, 63 percent of respondents say that insider attacks represent a medium-to-high risk—higher than any other category except human error (by an insider, I might add).

    Q. What are the dangers of this type of risk?

    A. Insiders tend to have special insight into, and access to, the kinds of data that are especially sensitive. Breaches can result in long-term legal issues and financial penalties. They can also damage an organization's brand in a way that directly impacts its bottom line. Finally, there is the potential loss of intellectual property, which can have serious long-term consequences because of the loss of market advantage.

    Q. How can organizations protect themselves against abuse of privileged access?

    A. Every organization has privileged users, and that will always be the case. The questions are how much access those users should have to application data stored in the database, and how that default access can be controlled. Oracle Database Vault was designed specifically for this purpose and helps protect application data against unauthorized access. Oracle Database Vault can be used to block default privileged user access from inside the database, as well as increase security controls on the application itself. Attacks can and do come from inside the organization, and attempts to exploit a privileged account are just as likely to come from outside. Using Oracle Database Vault protection, boundaries can be placed around database schemas, objects, and roles, preventing privileged account access from being exploited by hackers and insiders. A new Oracle Database Vault capability called privilege analysis identifies privileges and roles used at runtime, which can then be audited or revoked by the security administrators to reduce the attack surface and increase the security of applications overall.

    For a more comprehensive look at controlling data access and restricting privileged data in Oracle Database, download Needham's new e-book, Securing Oracle Database 12c: A Technical Primer.

    Read the article

  • Is this simple XOR encrypted communication absolutely secure?

    - by user3123061
    Say Alice has a 4GB USB flash drive and Peter also has a 4GB USB flash drive. They meet once and save to both drives two files named alice_to_peter.key (2GB) and peter_to_alice.key (2GB), which contain randomly generated bits. They never meet again and communicate only electronically. Alice maintains a variable called alice_pointer and Peter maintains a variable called peter_pointer, both initially set to zero. When Alice needs to send a message to Peter, they do:

      encrypted_message_to_peter[n] = message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n]

    where n is the n-th byte of the message. Then alice_pointer is attached at the beginning of the encrypted message, (alice_pointer + encrypted message) is sent to Peter, and alice_pointer is incremented by the length of the message (and for maximum security, the used part of the key can be erased). Peter receives the encrypted message, reads alice_pointer stored at its beginning, and does this:

      message_to_peter[n] = encrypted_message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n]

    And for maximum security, after reading the message he also erases the used part of the key. EDIT: In fact, this step with this simple algorithm (without integrity check and authentication) decreases security; see Paulo Ebermann's post below.

    When Peter needs to send a message to Alice, they perform the analogous steps with peter_to_alice.key and peter_pointer. With this trivial scheme they can send, every day for the next 50 years, 2GB / (50 * 365) = ca. 115kB of encrypted data in both directions. If they need to send more data, they simply use larger storage for the keys; for example, with today's 2TB hard disks (1TB of keys) it is possible to exchange 60MB/day for the next 50 years! (That is practically a lot of data; with compression, for example, it is more than an hour of high-quality voice communication per day.)

    It seems to me there is no way for an attacker to read the encrypted message without the keys, even with an infinitely fast computer: brute force yields every possible message that fits the length of the ciphertext, but that is an astronomical number of messages and the attacker doesn't know which of them is the actual one. Am I right? Is this communication scheme really absolutely secure? And if it is secure, does this communication method have its own name? (I mean, XOR encryption is well known, but what is the name of this concrete practical application that uses large memories on both communicating sides for keys? I humbly expect that this application was invented by someone before me :-) )

    Note: If it is absolutely secure, then that is amazing, because with today's low-cost large memories it is practically a much cheaper way of secure communication than expensive quantum cryptography, with equivalent security!

    EDIT: I think this will become more and more practical in the future as the cost of memory falls. It could solve secure communication forever. Today you have no certainty that someone won't successfully attack an existing cipher a year from now, making its often expensive implementations insecure. In many cases, communication is preceded by a step where the communicating parties meet personally; that is the time to generate large keys. I think it is perfect for military communication, for example with submarines, which can have a hard drive with large keys installed while military headquarters keeps one hard drive for each submarine. It could also be practical in everyday life, for example for controlling your bank account, because when you open your account you meet with the bank, etc.
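
    (The scheme described is essentially a one-time pad with an explicit key-consumption pointer. As a concrete illustration, here is a minimal, hypothetical Python sketch of it; the file name, demo key size, and helper names are placeholders, and it deliberately omits the authentication/integrity layer that the EDIT above notes is missing.)

      import os

      KEY_FILE = "alice_to_peter.key"  # hypothetical pre-shared pad file

      def make_demo_key(size=1 << 20):
          # Stand-in for the 2GB key exchanged in person; 1MB is enough for a demo.
          with open(KEY_FILE, "wb") as f:
              f.write(os.urandom(size))

      def xor_with_key(data, pointer):
          # XOR `data` against pad bytes starting at offset `pointer`.
          with open(KEY_FILE, "rb") as f:
              f.seek(pointer)
              key = f.read(len(data))
          if len(key) < len(data):
              raise ValueError("pad exhausted; a one-time pad must never be reused")
          return bytes(m ^ k for m, k in zip(data, key))

      def encrypt(message, alice_pointer):
          # Alice prepends her pointer in the clear, then advances it.
          packet = alice_pointer.to_bytes(8, "big") + xor_with_key(message, alice_pointer)
          return packet, alice_pointer + len(message)

      def decrypt(packet):
          # Peter reads the pointer and XORs with the same pad bytes.
          pointer = int.from_bytes(packet[:8], "big")
          return xor_with_key(packet[8:], pointer)

      if __name__ == "__main__":
          make_demo_key()
          pointer = 0
          packet, pointer = encrypt(b"attack at dawn", pointer)
          print(decrypt(packet))  # b'attack at dawn'

    (Even in this sketch you can see why authentication matters: XOR is malleable, so an attacker who can guess a plaintext position can flip the corresponding ciphertext bit undetected. The pad gives information-theoretic confidentiality when the key bytes are truly random and never reused, but that alone does not protect against active attackers.)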

    Read the article

  • Would using a self-signed SSL certificate be appropriate in this scenario?

    - by Kevin Y
    Now I realize this topic has been discussed in a few questions before (specifically this one), but I'm still a little confused about the implications of using a self-signed certificate, and how I would be affected by doing so in this case. After reading various sources, I'm still unclear on the exact details.

    "The biggest problem with a self-signed certificate is a man-in-the-middle attack. Even if you are 100% sure that you are on the correct website and you completely trust the site (your email server for example), you could have someone intercept the connection and present you with their own self-signed certificate. You would think that you are using a secure connection with your email server but you are really using a secure connection to an attacker's email server." – SSL Shopper

    So somebody could switch out my self-signed certificate with their own, and I wouldn't be able to detect it? The way this site phrases it makes it sound worse to install a self-signed certificate than to leave your site without a certificate at all.

    "Self-signed certificates cannot (by nature) be revoked, which may allow an attacker who has already gained access to monitor and inject data into a connection to spoof an identity if a private key has been compromised. CAs on the other hand have the ability to revoke a compromised certificate if alerted, which prevents its further use." – Wikipedia

    Does this mean that the only way someone could switch out their own certificate for mine is for them to find out the private key? I suppose this is more secure, but I'm still slightly confused about what exactly results from using a self-signed certificate. Is the only issue that obnoxious security warning that pops up in your browser when directed to the site, or is there more to it?

    Now in my case, I want to add an SSL certificate to a minuscule Wordpress blog I run that I don't expect anyone else to read anytime soon; I mainly started it to get into the habit of blogging, and to learn more about the process of administrating a site (e.g. what to do in situations like this one). Whenever I go to the login page and there's an http:// instead of https://, I cringe a little. Submitting my password feels like shouting it out loud with hundreds of people listening. I don't plan on adding any other authors to the site, so I am the only person who would ever need to log in. This isn't a site I'm trying to get page views from, or one that handles e-commerce or any sensitive info like that; it handles only my username and password for logging in. One of the concerns (that I've gathered so far) with a self-signed certificate is that non-technical users might be scared by the security warning, but that would not be an issue in my case.

    TL;DR: If scaring visitors away isn't a concern (which it isn't in my case), is it acceptable to use a self-signed certificate for the purpose of encrypting my Wordpress blog's password, or are there added security issues I should be aware of? Essentially, I'm wondering whether adding a self-signed certificate will be safer than leaving my login page the way it is now, or whether it adds the potential for more security breaches than leaving it sans SSL.
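
    (For context, generating such a certificate is trivial; that is both its appeal and its weakness. Below is a hypothetical sketch using Python's third-party cryptography package; the hostname and file names are placeholders, and the same result is commonly produced with a single openssl req command.)

      from datetime import datetime, timedelta

      from cryptography import x509
      from cryptography.x509.oid import NameOID
      from cryptography.hazmat.primitives import hashes, serialization
      from cryptography.hazmat.primitives.asymmetric import rsa

      # Generate the private key that will both sign and be certified.
      key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

      # For a self-signed certificate the subject and the issuer are the same name.
      name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "blog.example.com")])

      cert = (
          x509.CertificateBuilder()
          .subject_name(name)
          .issuer_name(name)  # self-signed: issuer == subject
          .public_key(key.public_key())
          .serial_number(x509.random_serial_number())
          .not_valid_before(datetime.utcnow())
          .not_valid_after(datetime.utcnow() + timedelta(days=365))
          .add_extension(
              x509.SubjectAlternativeName([x509.DNSName("blog.example.com")]),
              critical=False,
          )
          .sign(key, hashes.SHA256())  # signed with its own key, hence "self-signed"
      )

      with open("key.pem", "wb") as f:
          f.write(key.private_bytes(
              serialization.Encoding.PEM,
              serialization.PrivateFormat.TraditionalOpenSSL,
              serialization.NoEncryption(),
          ))
      with open("cert.pem", "wb") as f:
          f.write(cert.public_bytes(serialization.Encoding.PEM))

    (The browser warning exists precisely because nothing outside this script vouches for the key: anyone can run the same few lines for any hostname. That is why, for a single-user setup like the one described, manually trusting or pinning this specific certificate in your own browser is what actually closes the man-in-the-middle gap.)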

    Read the article

  • LexisNexis and Oracle Join Forces to Prevent Fraud and Identity Abuse

    - by Tanu Sood
    Author: Mark Karlstrand. About the Writer: Mark Karlstrand is a Senior Product Manager at Oracle focused on innovative security for enterprise web and mobile applications. Over the last sixteen years Mark served as a director at a number of tech startups before joining Oracle in 2007. Working with a team of talented architects and engineers, Mark developed Oracle Adaptive Access Manager, a best-of-breed access security solution.

    The world’s top enterprise software company and the world leader in data-driven solutions have teamed up to provide a new integrated security solution to prevent fraud and misuse of identities. LexisNexis Risk Solutions, a Gold level member of Oracle PartnerNetwork (OPN), today announced it has achieved Oracle Validated Integration of its Instant Authenticate product with Oracle Identity Management.

    Oracle provides the most complete identity and access management platform. As the only identity management provider to offer advanced capabilities including device fingerprinting, location intelligence, real-time risk analysis, and context-aware authentication and authorization, Oracle's offering is unique in the industry. LexisNexis Risk Solutions provides the industry-leading Instant Authenticate dynamic knowledge-based authentication (KBA) service, which offers customers a secure and cost-effective means to authenticate new users or prove identity for password resets, lockouts, and similar scenarios. Oracle and LexisNexis now offer an integrated solution that combines the power of the most advanced identity management platform and superior data-driven user authentication to stop identity fraud in its tracks and, in turn, offer significant operational cost savings.

    The solution can challenge users with dynamic knowledge-based authentication based on the risk of an access request or transaction, thereby adding another layer to authentication methods such as static challenge questions or one-time passwords when needed. For example, with Oracle Identity Management self-service, the forgotten-password reset workflow utilizes advanced capabilities including device fingerprinting, location intelligence, risk analysis, and one-time password (OTP) via short message service (SMS) to secure this sensitive flow. Even when a user has lost or misplaced his or her mobile phone and therefore cannot receive the SMS, the new integrated solution eliminates the need to contact the help desk: the Oracle Identity Management platform dynamically switches to the LexisNexis Instant Authenticate service for authentication if the user is not able to authenticate via OTP. The integrated Oracle and LexisNexis solution thus both improves user experience and saves money by avoiding unnecessary help desk calls.

    Oracle Identity and Access Management secures applications, Juniper SSL VPN, and other web resources with a thoroughly modern, layered, and context-aware platform. Users don't gain access just because they happen to have a valid username and password. An enterprise utilizing the Oracle solution has the ability to predicate access on the specific context of the current situation. The device, location, temporal data, and any number of other attributes are evaluated in real time to determine the specific risk at that moment. If the risk is elevated, a user can be challenged for additional authentication, refused access, or allowed access with limited privileges.

    The LexisNexis Instant Authenticate dynamic KBA service plugs into the Oracle platform to provide an additional layer of security by validating a user's identity in high-risk access or transactions. The large and varied pool of data the LexisNexis solution draws on to quiz a user makes this challenge mechanism even more robust. This strong combination of Oracle and LexisNexis user authentication capabilities greatly mitigates the risk of exposing sensitive applications and services on the Internet, which helps an enterprise grow its business with confidence.

    Resources:
    Press release: LexisNexis® Achieves Oracle Validated Integration with Oracle Identity Management
    Oracle Access Management (HTML)
    Oracle Adaptive Access Manager (pdf)

    Read the article

  • OTN ArchBeat Top 10 for September 2012

    - by Bob Rhubart
    The results are in... Listed below are the Top 10 most popular items shared via the OTN ArchBeat Facebook Page for the month of September 2012.

    1. The Real Architects of Los Angeles - OTN Architect Day - Oct 25
    No gossip. No drama. No hair pulling. Just a full day of technical sessions and peer interaction focused on using Oracle technologies in today's cloud and SOA architectures. The event is free, but seating is limited, so register now. Thursday October 25, 2012. 8:00 a.m. – 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048.

    2. Oracle Fusion Middleware Security: Attaching OWSM policies to JRF-based web services clients
    "OWSM (Oracle Web Services Manager) is Oracle's recommended method for securing SOAP web services," says Oracle Fusion Middleware A-Team member Andre Correa. "It provides agents that encapsulate the necessary logic to interact with the underlying software stack on both service and client sides. Such agents have their behavior driven by policies. OWSM ships with a bunch of policies that are adequate to most common real world scenarios." His detailed post shows how to make it happen.

    3. Oracle 11gR2 RAC on Software Defined Network (SDN) (OpenvSwitch, Floodlight, Beacon) | Gilbert Stan
    "The SDN [software defined network] idea is to separate the control plane and the data plane in networking and to virtualize networking the same way we have virtualized servers," explains Gil Standen. "This is an idea whose time has come because VMs and vmotion have created all kinds of problems with how to tell networking equipment that a VM has moved and to preserve connectivity to VPN end points, preserve IP, etc." H/T to Oracle ACE Director Tim Hall for the recommendation.

    4. Process Oracle OER Events using a simple Web Service | Bob Webster
    Bob Webster's post "provides an example of a simple web service that processes Oracle Enterprise Repository (OER) Events. The service receives events from OER and utilizes the OER REX API to implement simple OER automations for selected event types."

    5. Understanding Oracle BI 11g Security vs Legacy Oracle BI 10g | Christian Screen
    "After conducting a large amount of Oracle BI 10g to Oracle BI 11g upgrades and after writing the Oracle BI 11g book," says Oracle ACE Christian Screen, "I still continually get asked one of the most basic questions regarding security in Oracle BI 11g: How does it compare to Oracle BI 10g? The trail of questions typically goes on to: What are the differences? And how do we leverage our current Oracle BI 10g security table schema in Oracle BI 11g?"

    6. OIM-OAM-OAAM integration using TAP – Request Flow you must understand!! | Atul Kumar
    Atul Kumar's post addresses "key points and request flow that you must understand" when integrating three Oracle Identity Management products: Oracle Identity Manager, Oracle Access Manager, and Oracle Adaptive Access Manager.

    7. Adding a runtime LOV for a taskflow parameter in WebCenter | Yannick Ongena
    Oracle ACE Yannick Ongena illustrates how to customize the parameters tab for a taskflow in WebCenter.

    8. Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET | Scott Nelson
    "It has been a very winding path and this blog entry is intended to share both the lessons learned and relevant approaches that led to those learnings," says Scott Nelson. "Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there."

    9. 15 Lessons from 15 Years as a Software Architect | Ingo Rammer
    In this presentation from the GOTO Conference in Copenhagen, Ingo Rammer shares 15 tips regarding people, complexity, and technology that he learned doing software architecture for 15 years.

    10. WebCenter Content (WCC) Trace Sections | ECM Architect
    ECM Architect Kevin Smith shares a detailed technical post covering WebCenter Content (WCC) Trace Sections.

    Thought for the Day
    "Eventually everything connects - people, ideas, objects. The quality of the connections is the key to quality per se." — Charles Eames (June 17, 1907 – August 21, 1978)
    Source: SoftwareQuotes.com

    Read the article
