Search Results

Search found 45806 results on 1833 pages for 'add apt repository'.


  • Git: Create a branch from unstaged/uncommitted changes on master

    - by knoopx
    Context: I'm working on master, adding a simple feature. After a few minutes I realize it was not so simple and it would have been better to work in a new branch. This always happens to me, and I have no idea how to switch to another branch and take all these uncommitted changes with me, leaving master clean. I assumed git stash && git stash branch new_branch would simply accomplish that, but this is what I get:

        ~/test $ git status
        # On branch master
        nothing to commit (working directory clean)
        ~/test $ echo "hello!" > testing
        ~/test $ git status
        # On branch master
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #   modified: testing
        # no changes added to commit (use "git add" and/or "git commit -a")
        ~/test $ git stash
        Saved working directory and index state WIP on master: 4402b8c testing
        HEAD is now at 4402b8c testing
        ~/test $ git status
        # On branch master
        nothing to commit (working directory clean)
        ~/test $ git stash branch new_branch
        Switched to a new branch 'new_branch'
        # On branch new_branch
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #   modified: testing
        # no changes added to commit (use "git add" and/or "git commit -a")
        Dropped refs/stash@{0} (db1b9a3391a82d86c9fdd26dab095ba9b820e35b)
        ~/test $ git s
        # On branch new_branch
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #   modified: testing
        # no changes added to commit (use "git add" and/or "git commit -a")
        ~/test $ git checkout master
        M       testing
        Switched to branch 'master'
        ~/test $ git status
        # On branch master
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #   modified: testing
        # no changes added to commit (use "git add" and/or "git commit -a")

    Do you know if there is any way of accomplishing this?
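
    A minimal sketch of the usual answer (assuming the changes don't conflict with the target branch): uncommitted changes belong to the working tree, not to a branch, so a plain branch switch carries them along and no stash is needed. Once they are committed on the new branch, master is clean.

        # create and switch to a new branch, taking the dirty working tree with you
        git checkout -b new_branch
        git add testing
        git commit -m "move the work-in-progress off master"
        # the stash route (git stash && git stash branch new_branch) is only needed
        # when the dirty files would conflict with the checkout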

    Read the article

  • How to do binding programmatically?

    - by user175908
    Hello, could anyone identify the problem in this code? (I'm kind of a newbie with WPF bindings.) This code executes after the chart is loaded, when I click a button, and I get an error. Update: I don't get that error anymore, thanks to Tomas. Now no error occurs, but the chart looks completely blank (no columns). Update: the code now looks like this:

        // create a very simple DataSet
        var dataSet = new DataSet("MyDataSet");
        var table = dataSet.Tables.Add("MyTable");
        table.Columns.Add("Name");
        table.Columns.Add("Price");
        table.Rows.Add("Brick", 1.5d);
        table.Rows.Add("Soap", 4.99d);
        table.Rows.Add("Comic Book", 0.99d);

        // chart series
        var series = new ColumnSeries()
        {
            IndependentValueBinding = new Binding("[Name]"),  // How to deal with
            DependentValueBinding = new Binding("[Price]"),   // these two?
            ItemsSource = dataSet.Tables[0].DefaultView       // or maybe I make a mistake here?
        };

        // ---------- set additional binding as advised ------------------
        series.SetBinding(ColumnSeries.ItemsSourceProperty, new Binding());

        // chart stuff
        MyChart.Series.Add(series);
        MyChart.Title = "Names 'n Prices";

        // some code to remove the legend
        var style = new Style(typeof(Control));
        style.Setters.Add(new Setter(LegendItem.TemplateProperty, null));
        MyChart.LegendStyle = style;

    XAML:

        <Window x:Class="BindingzTest.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="MainWindow" Height="606" Width="988"
                xmlns:charting="clr-namespace:System.Windows.Controls.DataVisualization.Charting;assembly=System.Windows.Controls.DataVisualization.Toolkit">
            <Grid Name="LayoutRoot">
                <charting:Chart Name="MyChart" Margin="0,0,573,0" Height="289" VerticalAlignment="Top" />
                <Button Content="Button" Height="23" HorizontalAlignment="Left" Margin="272,361,0,0" Name="button1" VerticalAlignment="Top" Width="75" Click="chart1_Loaded" />
            </Grid>
        </Window>

    Thanks for the help in advance once more.
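
    A hedged guess at the blank chart, offered as a sketch rather than a confirmed fix: the final SetBinding call rebinds ItemsSourceProperty to the series' DataContext (which is null here), and a binding replaces the local value, so it overwrites the DefaultView assigned just above it. Dropping that call and typing the Price column numerically is usually enough:

        // minimal sketch, reusing the names from the code above
        table.Columns.Add("Price", typeof(double));           // numeric dependent values

        var series = new ColumnSeries
        {
            IndependentValueBinding = new Binding("[Name]"),  // DataRowView indexer
            DependentValueBinding   = new Binding("[Price]"),
            ItemsSource             = dataSet.Tables[0].DefaultView
        };
        // do NOT also call series.SetBinding(ColumnSeries.ItemsSourceProperty, new Binding());
        MyChart.Series.Add(series);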

    Read the article

  • Help understanding the differences between #define, const and enum in C and C++ at the assembly level.

    - by martin
    Recently I have been looking into the assembly code generated for #define, const and enum.

    C code (#define):

        3 #define pi 3
        4 int main(void)
        5 {
        6     int a, r = 1;
        7     a = 2 * pi * r;
        8     return 0;
        9 }

    Assembly code (for lines 6 and 7 of the C code) generated by GCC:

        6 mov  $0x1, -0x4(%ebp)
        7 mov  -0x4(%ebp), %edx
        7 mov  %edx, %eax
        7 add  %eax, %eax
        7 add  %edx, %eax
        7 add  %eax, %eax
        7 mov  %eax, -0x8(%ebp)

    C code (enum):

        2 int main(void)
        3 {
        4     int a, r = 1;
        5     enum { pi = 3 };
        6     a = 2 * pi * r;
        7     return 0;
        8 }

    Assembly code (for lines 4 and 6 of the C code) generated by GCC:

        6 mov  $0x1, -0x4(%ebp)
        7 mov  -0x4(%ebp), %edx
        7 mov  %edx, %eax
        7 add  %eax, %eax
        7 add  %edx, %eax
        7 add  %eax, %eax
        7 mov  %eax, -0x8(%ebp)

    C code (const):

        4 int main(void)
        5 {
        6     int a, r = 1;
        7     const int pi = 3;
        8     a = 2 * pi * r;
        9     return 0;
        10 }

    Assembly code (for lines 7 and 8 of the C code) generated by GCC:

        6 movl $0x3, -0x8(%ebp)
        7 movl $0x3, -0x4(%ebp)
        8 mov  -0x4(%ebp), %eax
        8 add  %eax, %eax
        8 imul -0x8(%ebp), %eax
        8 mov  %eax, 0xc(%ebp)

    I found that with #define and enum the assembly code is the same: the compiler uses three add instructions to perform the multiplication. However, with const, an imul instruction is used. Does anyone know the reason behind that?
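
    One hedged way to look at it, not taken from the original thread: in C (unlike C++) a const-qualified variable is not an integer constant expression, so at -O0 the compiler keeps it as an object in memory and multiplies by a runtime value, whereas #define and enum yield compile-time constants that invite the shift/add strength reduction. A small C illustration of that language rule:

        enum { pi_enum = 3 };
        const int pi_const = 3;

        int a[pi_enum];          /* OK: enum constants are integer constant expressions        */
        /* int b[pi_const]; */   /* error at file scope in C: pi_const is not a constant expr. */

        int scale(int r)
        {
            return 2 * pi_enum * r;   /* the factor 6 is known at compile time */
        }

    With optimization enabled (e.g. gcc -O1), the const version is typically constant-folded as well and all three forms compile to the same code.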

    Read the article

  • Using ObjectInputStream / ObjectOutputStream with files and an ArrayList

    - by soad el-hayek
    Hi everyone. I'm an IT student and it's time to finish my final project in Java. I've faced too many problems; this one I couldn't solve and I'm really upset. My code looks like this, in the Admin class:

        public ArrayList cos_info = new ArrayList();
        public ArrayList cas_info = new ArrayList();
        public int cos_count = 0;
        public int cas_count = 0;

        void coustmer_acount() throws FileNotFoundException, IOException {
            String add = null;
            do {
                person p = new person();
                cos_info.add(cos_count, p);
                cos_count++;
                add = JOptionPane.showInputDialog("Do you want to add more coustmer..\n'y'foryes ..\n 'n'for No ..");
            } while (add.charAt(0) == 'Y' || add.charAt(0) == 'y');
            writenew_cos();
            // add_acounts();
        }

        void writenew_cos() throws IOException {
            ObjectOutputStream aa = new ObjectOutputStream(new FileOutputStream("coustmer.txt"));
            aa.writeObject(cos_info);
            JOptionPane.showMessageDialog(null, "Added to file done sucessfuly..");
            aa.close();
        }

    In the Coustmer class:

        void read_cos() throws IOException, ClassNotFoundException {
            person p1 = null;
            int array_count = 0;
            ObjectInputStream d = new ObjectInputStream(new FileInputStream("coustmer.txt"));
            JOptionPane.showMessageDialog(null, d.available());
            for (int i = 0; d.available() == 0; i++) {
                a.add(array_count, (ArrayList) d.readObject());
                array_count++;
                JOptionPane.showMessageDialog(null, "Haaaaai :D");
                JOptionPane.showMessageDialog(null, array_count);
            }
            d.close();
            JOptionPane.showMessageDialog(null, array_count + "1111");
            for (int i = 0; i ...   // (the rest of this loop was cut off in the original post)

    It only shows the JOptionPane.showMessageDialog(null, d.available()) dialog, and then throws an exception at a.add(array_count, (ArrayList) d.readObject()). P.S.: person is my own class and it is Serializable.
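
    A minimal sketch of the usual pattern (using a list of strings so it runs on its own; the poster's person class would work the same way as long as it implements Serializable): write the whole list with a single writeObject call and read it back with a single readObject call, instead of looping on available(), which reports buffered bytes rather than "more objects remaining".

        import java.io.*;
        import java.util.ArrayList;

        public class SaveList {
            public static void main(String[] args) throws Exception {
                ArrayList<String> customers = new ArrayList<>();
                customers.add("alice");
                customers.add("bob");

                // write the complete list once
                try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("coustmer.txt"))) {
                    out.writeObject(customers);
                }

                // read the complete list back with one readObject call
                try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("coustmer.txt"))) {
                    @SuppressWarnings("unchecked")
                    ArrayList<String> loaded = (ArrayList<String>) in.readObject();
                    System.out.println("Read " + loaded.size() + " customers");
                }
            }
        }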

    Read the article

  • Why is my panel not positioned correctly even after setting the boundaries?

    - by nutellafella
    I'm trying to make a simple GUI with radio buttons, and I grouped them into one panel. I wanted it positioned on the leftmost side, so I used the setBounds method. Whatever numbers I put in the parameters, the panel won't move. Are panels not affected by the setBounds method, or is there another way to position my panel? Here's the snippet of my code:

        JPanel radioPanel = new JPanel();
        radioPanel.setLayout(new GridLayout(3, 1));

        JRadioButton Rbutton1 = new JRadioButton("Credit Card");
        JRadioButton Rbutton2 = new JRadioButton("E-Funds");
        JRadioButton Rbutton3 = new JRadioButton("Check");
        Rbutton3.setSelected(true);

        ButtonGroup Bgroup = new ButtonGroup();
        Bgroup.add(Rbutton1);
        Bgroup.add(Rbutton2);
        Bgroup.add(Rbutton3);

        radioPanel.add(Rbutton1);
        radioPanel.add(Rbutton2);
        radioPanel.add(Rbutton3);

        radioPanel.setBounds(10, 50, 50, 40); // this is where I'm trying to position the panel with the radio buttons

        paymentPanel.add(radioPanel);
        contentPane.add(paymentPanel); // contentPane is the frame
        contentPane.setVisible(true);
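
    A short sketch of the usual answer (assuming default layout managers are in play; the parent setup below is hypothetical): setBounds only takes effect when the parent has a null layout, because otherwise the layout manager decides position and size. To pin the radio panel to the left, let the parent's layout do it:

        // hypothetical parent setup, not the poster's exact code
        JPanel paymentPanel = new JPanel(new BorderLayout());
        paymentPanel.add(radioPanel, BorderLayout.WEST);   // keeps the radio buttons on the left edge

        // alternative: absolute positioning, at the cost of laying out every child yourself
        // paymentPanel.setLayout(null);
        // radioPanel.setBounds(10, 50, 50, 40);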

    Read the article

  • CodeIgniter - change URL on method call

    - by NemoPS
    I was wondering if the following can be done in CodeIgniter. Let's assume I have a file called Post.php, used to manage posts in an admin interface. It has several methods, such as index (lists all posts), add, update, delete... Now, I access the add method, so the URL becomes /posts/add, and I add some data. I click "save" to add the new post. It calls the same method with an if statement like "if ($this->input->post('addnew'))"; if that is passed, it calls the model and adds the post to the database. Here is the problem: if everything worked fine, it goes to the index with the list of all posts and displays a confirmation, BUT the URL would still be /posts/add, since I called the function like $this->index() after verifying the data was added. I cannot redirect to "posts/", since in that case no confirmation message would be shown! So my question is: can I call a method from another one in the same class and have the URL set to that method (/posts/index instead of /posts/add)? It's kind of confusing, but I hope I gave you enough info to spot the problem. Cheers!
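
    A hedged sketch of the common CodeIgniter pattern (assuming the session library and url helper are available): store the confirmation as flashdata, then redirect. The message survives exactly one request, so /posts can display it even though the URL has changed.

        <?php
        // inside Posts::add(), after the model call succeeds
        $this->load->library('session');
        $this->load->helper('url');

        $this->session->set_flashdata('message', 'Post added successfully.');
        redirect('posts');   // the browser is sent to /posts, so the URL is clean

        // inside Posts::index(), hand the message to the view
        // $data['message'] = $this->session->flashdata('message');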

    Read the article

  • Java queue and multi-dimension array question? [Beginner level]

    - by javaLearner.java
    First of all, this is my code (I just started learning Java):

        Queue<String> qe = new LinkedList<String>();
        qe.add("b");
        qe.add("a");
        qe.add("c");
        qe.add("d");
        qe.add("e");

    My questions: Is it possible to add an element to the queue with two values, like qe.add("a", "1") (where 1 is an integer), so that I know element "a" has the value 1? If I then add, say, 2 to element a, I would have a = 3. If this can't be done, what else in the Java class library can handle it? I tried to use a multi-dimension array, but it's kind of hard to do queue operations like pop, push etc. with it (maybe I am wrong). Also, how do I refer to a specific element in the queue, for example element a, to check its value? [Note] Please don't give me links that ask me to read the Java docs. I was reading them and I still don't get it. The reason I ask here is that I know I can find the answer faster and easier.
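
    A minimal sketch of one common approach (an illustration, not the only option): a LinkedHashMap keeps insertion order, queue-like, while letting you look up and update a value by key.

        import java.util.LinkedHashMap;
        import java.util.Map;

        public class QueueWithValues {
            public static void main(String[] args) {
                Map<String, Integer> counts = new LinkedHashMap<>();
                counts.put("a", 1);                    // add element "a" with value 1
                counts.merge("a", 2, Integer::sum);    // add 2 to "a": value becomes 3
                System.out.println(counts.get("a"));   // check a specific element's value -> 3

                // queue-style removal of the oldest entry, if needed
                String oldest = counts.keySet().iterator().next();
                counts.remove(oldest);
            }
        }

    If true Queue semantics are required, a Queue of a small two-field class (or of Map.Entry) works as well.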

    Read the article

  • How to call a generic method with an anonymous type involving generics?

    - by Alex Black
    I've got this code that works:

        def testTypeSpecialization = {
          class Foo[T]
          def add[T](obj: Foo[T]): Foo[T] = obj
          def addInt[X <% Foo[Int]](obj: X): X = {
            add(obj)
            obj
          }
          val foo = addInt(new Foo[Int] {
            def someMethod: String = "Hello world"
          })
          assert(true)
        }

    But I'd like to write it like this:

        def testTypeSpecialization = {
          class Foo[T]
          def add[X, T <% Foo[X]](obj: T): T = obj
          val foo = add(new Foo[Int] {
            def someMethod: String = "Hello world"
          })
          assert(true)
        }

    This second one fails to compile: no implicit argument matching parameter type (Foo[Int]{ ... }) => Foo[Nothing] was found. Basically, I'd like to create a new anonymous class/instance on the fly (e.g. new Foo[Int] { ... }) and pass it into an "add" method which will add it to a list and then return it. The key thing here is that I'd like the type of the variable from "val foo = ..." to be the anonymous class, not Foo[Int], since it adds methods (someMethod in this example). Any ideas? I think the second version fails because the type Int is being erased. I can apparently 'hint' the compiler like this:

        def testTypeSpecialization = {
          class Foo[T]
          def add[X, T <% Foo[X]](dummy: X, obj: T): T = obj
          val foo = add(2, new Foo[Int] {
            def someMethod: String = "Hello world"
          })
          assert(true)
        }

    Read the article

  • Is it possible to dynamically insert rows in an existing DataTable (No DataSource used)?

    - by aparnakarthik
    Hi, I have created a DataTable with three fields, namely TimeTask, TaskItem and Count (count of users), e.g. {"12:30AM-01:00AM", T1, 3}:

        dataTable.Columns.Add("Task Time", typeof(string));
        dataTable.Columns.Add("Task", typeof(string));
        dataTable.Columns.Add("Count", typeof(int));

        dataTable.Rows.Add("12:00AM-12:15AM", "T1", 6);
        dataTable.Rows.Add("12:45AM-01:00AM", "T1", 5);
        dataTable.Rows.Add("01:00AM-01:15AM", "T1", 1);
        dataTable.Rows.Add("01:15AM-01:30AM", "T2", 4);
        dataTable.Rows.Add("01:30AM-01:45AM", "T2", 9);

        GridView1.DataSource = dataTable;
        GridView1.DataBind();

    In this data there is no task for the time slots "12:15AM-12:30AM" and "12:30AM-12:45AM", yet those slots should still be inserted, as:

        TimeTask          TaskItem   Count
        12:00AM-12:15AM   T1         6
        12:15AM-12:30AM   -          -
        12:30AM-12:45AM   -          -
        12:45AM-01:00AM   T1         5
        01:00AM-01:15AM   T1         1
        01:15AM-01:30AM   T2         4
        01:30AM-01:45AM   T2         9

    How do I check for the missing rows? Is it possible to dynamically insert rows into an existing DataTable (no DataSource used) in this scenario? Please help. Thanks :-)
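
    A rough sketch of one way to fill the gaps (the slot generation below is an assumption; adjust the range to your data): walk the expected 15-minute slots and insert a placeholder row wherever DataTable.Select finds no match. Count is an int column, so use DBNull rather than "-".

        // generate the expected slot labels: 12:00AM-12:15AM, 12:15AM-12:30AM, ...
        DateTime start = DateTime.Today;                        // 12:00AM
        for (int i = 0; i < 7; i++)                             // however many slots you need
        {
            string slot = start.AddMinutes(15 * i).ToString("hh:mmtt") + "-" +
                          start.AddMinutes(15 * (i + 1)).ToString("hh:mmtt");

            if (dataTable.Select("[Task Time] = '" + slot + "'").Length == 0)
            {
                DataRow row = dataTable.NewRow();
                row["Task Time"] = slot;
                row["Task"] = "-";
                row["Count"] = DBNull.Value;                    // int column: no value
                dataTable.Rows.InsertAt(row, i);                // keep the chronological position
            }
        }
        GridView1.DataSource = dataTable;
        GridView1.DataBind();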

    Read the article

  • i18n - What are some naming-convention to use in creating language files?

    - by John Himmelman
    I'm developing a CMS that requires i18n support. The translation strings are stored as an array in a language file (i.e. en.php). Are there any naming conventions for this? How can I improve on the sample language file below?

        // General
        'general.title' => 'CMS - USA / English',
        'general.save' => 'Save',
        'general.choose_category' => 'Choose category',
        'general.add' => 'Add',
        'general.continue' => 'Continue',
        'general.finish' => 'Finish',

        // Navigation
        'nav.categories' => 'Categories',
        'nav.products' => 'Products',
        'nav.collections' => 'Collections',
        'nav.styles' => 'Styles',
        'nav.experts' => 'Experts',
        'nav.shareyourstory' => 'Share Your Story',

        // Products
        'cms.products' => 'Products',
        'cms.add_product' => 'Add Product',

        // Categories
        'cms.categories' => 'Categories',
        'cms.add_category' => 'Add Category',

        // Collections
        'cms.collections' => 'Collections',
        'cms.add_collections' => 'Add Collection',

        // Stylists
        'cms.styles' => 'Stylists',
        'cms.add_style' => 'Add Style',
        'cms.add_a_style' => 'Add a style',

        // Share your story
        'cms.share_your_story' => 'Share Your Story',

        // Styles
        'cms.add_style' => 'Add Style',
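
    One hedged convention, sketched rather than prescribed: group keys by module with nested arrays so duplicates like cms.add_style stand out, and keep the module prefix aligned with the controller or view that consumes it. A hypothetical layout:

        <?php
        // en.php - illustrative structure, names are placeholders
        return array(
            'general' => array(
                'save'     => 'Save',
                'add'      => 'Add',
                'continue' => 'Continue',
            ),
            'nav' => array(
                'categories' => 'Categories',
                'products'   => 'Products',
            ),
            'products' => array(
                'title' => 'Products',
                'add'   => 'Add Product',
            ),
        );
        // a lookup helper can then resolve dotted keys, e.g. lang('products.add')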

    Read the article

  • Layout question with BlackBerry IDE FieldManagers; how to emulate HTML's rowspan

    - by canadiancreed
    Hello all. I'm trying to create a page where a list of items is displayed in rows, with multiple columns on the left but only one on the right, encased within a HorizontalFieldManager. Currently I have the following code:

        VerticalFieldManager mainScreenManager = new VerticalFieldManager();
        mainScreenManager.add(titleField);

        for (int i = 0; i < 10; i++) {
            HorizontalFieldManager itemAreaManager = new HorizontalFieldManager();
            VerticalFieldManager itemTextFieldsAreaManager = new VerticalFieldManager();
            itemTextFieldsAreaManager.add(new RichTextField(contentArticleTitle[i]));
            itemTextFieldsAreaManager.add(new RichTextField(contentArticleDate[i]));
            itemTextFieldsAreaManager.add(new SeparatorField());
            itemAreaManager.add(itemTextFieldsAreaManager);
            itemAreaManager.add(new ButtonField("", 0));
            mainScreenManager.add(itemAreaManager);
        }

        add(mainScreenManager);

    Now, the issue I'm experiencing is probably obvious to those familiar with managers: the first item added to the HorizontalFieldManager consumes the entire available width, so the button is never shown. What I'm wondering is how I can tell this in an extended class to only take up a certain percentage of the available width. I've tried sublayout and setting the width to a certain amount, but then it just shows the button instead of the text (pretty much the same problem, reversed).

    Read the article

  • Making a Dynamically Created Excel Report Downloadable

    - by Nick LaMarca
    I have 2 blocks of code, if someone could help me put them together I would get the functionality I am looking for. The first block of code downloads a gridview to excel using the download dialog I am looking for: Public Overloads Overrides Sub VerifyRenderingInServerForm(ByVal control As Control) ' Verifies that the control is rendered End Sub Private Sub ExportToExcel(ByVal filename As String, ByVal gv As GridView, ByVal numOfCol As Integer) Response.Clear() Response.Buffer = True Response.AddHeader("content-disposition", String.Format("attachment; filename={0}", filename)) Response.Charset = "" Response.ContentType = "application/vnd.ms-excel" Dim sw As New StringWriter() Dim hw As New HtmlTextWriter(sw) gv.AllowPaging = False gv.DataBind() 'Change the Header Row back to white color gv.HeaderRow.Style.Add("background-color", "#FFFFFF") For i As Integer = 0 To numOfCol - 1 gv.HeaderRow.Cells(i).Style.Add("background-color", "blue") gv.HeaderRow.Cells(i).Style.Add("color", "#FFFFFF") Next For i As Integer = 0 To gv.Rows.Count - 1 Dim row As GridViewRow = gv.Rows(i) 'Change Color back to white row.BackColor = System.Drawing.Color.White For j As Integer = 0 To numOfCol - 1 row.Cells(j).Style.Add("text-align", "center") Next 'Apply text style to each Row row.Attributes.Add("class", "textmode") 'Apply style to Individual Cells of Alternating Row If i Mod 2 <> 0 Then For j As Integer = 0 To numOfCol - 1 row.Cells(j).Style.Add("background-color", "#CCFFFF") row.Cells(j).Style.Add("text-align", "center") '#C2D69B 'row.Cells(j).Style.Add("font-size", "12pt") Next End If Next gv.RenderControl(hw) 'style to format numbers to string Dim style As String = "<style> .textmode { mso-number-format:\@; } </style>" Response.Write(style) Response.Output.Write(sw.ToString()) Response.Flush() Response.End() End Sub The second block of code is a sample report I am wish to be downloaded. So instead of downloading a gridview I want this function to accept a worksheet object.
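
    A hedged sketch of how the two halves might meet (assuming the report is an Excel interop Workbook - the post does not say which Excel library is used, so the type and names below are placeholders): save the workbook to a temporary file and stream that file through the same Response pattern used for the GridView export.

        ' minimal sketch, hypothetical names; adjust to the actual workbook type
        Private Sub ExportWorkbook(ByVal wb As Microsoft.Office.Interop.Excel.Workbook, ByVal filename As String)
            Dim tempPath As String = System.IO.Path.Combine(System.IO.Path.GetTempPath(), filename)
            wb.SaveAs(tempPath)

            Response.Clear()
            Response.Buffer = True
            Response.ContentType = "application/vnd.ms-excel"
            Response.AddHeader("content-disposition", String.Format("attachment; filename={0}", filename))
            Response.TransmitFile(tempPath)
            Response.Flush()
            Response.End()
        End Sub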

    Read the article

  • Apache & SVN on Ubuntu - Post-commit hook fails silently, pre-commit hook “Permission Denied”

    - by 113169587962668775787
    I've been struggling for the past couple days to get post-commit email notifications working on my SVN server (running via HTTP with Apache2 on Ubuntu 9.10). SVN commits work fine, but for some reason the hooks are not being properly executed. Here are the configuration settings: - Users access the repo via HTTP with the apache dav_svn module (I created users/passwords via htpasswd in a dav_svn.passwd file). dav_svn.conf: <Location /svn/repos> DAV svn SVNPath /home/svn/repos AuthType Basic AuthName "Subversion Repository" AuthUserFile /etc/apache2/dav_svn.passwd Require valid-user </Location> I created a post-commit hook file that writes a simple message to a file in the repository root: /home/svn/repos/hooks/post-commit: #!/bin/sh REPOS="$1" REV="$2" /bin/echo 'worked' > ${REPOS}/postcommit.log I set the entire repository to be owned by www-data (the apache user), and assigned 755 permissions to the post-commit script when I test the post-commit script using the www-data user in an empty environment, it works: sudo -u www-data env - /home/svn/repos/hooks/post-commit /home/svn/repos 7 But when I commit on a client machine, the commit is successful, but the post-commit script does not seem to be executed. I also tried running a simple script for the pre-commit hook, and I get an error, even with an empty pre-commit script: "Commit failed (details follow): Can't create null stdout for hook '/home/svn/repos/hooks/pre-commit': Permission denied" I did a few searches on Google for this error and I presume that this is an issue with the apache user (www-data) not having adequate permissions, specifically to execute /dev/null. I also read that the reason post-commit fails silently is because that it doesn't report with stdout. Anyway, I've also tried giving the apache user (www-data) ownership of the entire repository, and edited the apache virtualhost to allow operations on the server root, and I'm still getting permission denied /etc/apache2/sites-available/primarydomain.conf <Directory /> Options FollowSymLinks AllowOverride None Order allow,deny Allow from all </Directory> Any ideas/suggestions would be greatly appreciated! Thanks
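
    A hedged debugging sketch (generic hook hygiene, not a diagnosis of this exact server): hooks run as the Apache user with an empty environment and with stdout/stderr discarded, so the first step is usually to make the hook log its own failures and to check that the apache user can actually execute the script and write to /dev/null.

        #!/bin/sh
        # /home/svn/repos/hooks/post-commit - log everything the hook does
        exec >> /var/tmp/svn-post-commit.log 2>&1
        set -x
        REPOS="$1"
        REV="$2"
        /bin/echo "worked for rev $REV" > "$REPOS/postcommit.log"

        # sanity checks for the "Permission denied" on pre-commit
        ls -l /dev/null                      # should be crw-rw-rw- root root
        sudo -u www-data test -x /home/svn/repos/hooks/pre-commit && echo "pre-commit is executable"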

    Read the article

  • Traffic Shaping using tc

    - by Simon
    Hi guys, I have a 1.5 Mbit/s link that I want to share among 150 users. My setup is the following: a Linux box with 3 NICs:

        eth0 - public IP
        eth1 - subnet A - 50 users (static IPs)
        eth2 - subnet B - 100 users (via DHCP)

    I am using Squid as a transparent proxy on port 3128, and a DHCP server on ports 67 and 68. This is the script I was creating, but I think packets are not going to the right queues:

        #!/bin/bash
        DEV=eth0
        RATE_MAIN=2048kbit
        CEIL_MAIN=2048kbit
        BURST=1b
        CBURST=1b
        RATE_DEFAULT=1024kbit
        CEIL_DEFAULT=$CEIL_MAIN
        PRIO_DEFAULT=3
        RATE_P2P=1024Kbit
        CEIL_P2P=$CEIL_MAIN
        PRIO_P2P=4
        RATE_IND=32kbit
        CEIL_IND=$CEIL_DEFAULT

        tc qdisc del dev $DEV root
        tc qdisc add dev $DEV root handle 1: htb default 30
        tc class add dev $DEV parent 1: classid 1:1 htb rate $RATE_MAIN ceil $CEIL_MAIN
        tc class add dev $DEV parent 1:1 classid 1:10 htb rate $RATE_DEFAULT ceil $CEIL_MAIN burst $BURST cburst $CBURST prio $PRIO_WEB
        ## some other sub class for p2p other traffic
        tc class add dev $DEV parent 1:1 classid 1:20 htb rate $RATE_P2P ceil $CEIL_P2P burst $BURST cburst $CBURST prio $PRIO_P2P

        $IPS_NET1=50
        $IPS_NET2=100
        let $IPS=$IPS_NET1+$IPS_NET2

        for ((i=1; i<= $IPS; i++))
        do
            let CLASSID=($i+100)
            let HANDLE=($i+100)
            tc class add dev $DEV parent 1:10 classid 1:$CLASSID htb rate $RATE_IND ceil $CEIL_IND
            tc qdisc add dev $DEV parent 1:$CLASSID handle $HANDLE: sfq perturb 10
        done

        ## Generate IP addresses ##
        IP_ADDRESSES=""

        # Subnet A
        BASE_IP=10.10.10.
        for ((i=2; i<=$IPS_NET1+1; i++))
        do
            TEMP="$BASE_IP$i"
            IP=ADDRESSES="$IP_ADDRESSES $TEMP"
        done

        # Subnet B
        BASE_IP=192.168.0.
        for ((i=2; i<=$IPS_NET2+1; i++))
        do
            TEMP="$BASE_IP$i"
            IP_ADDRESSES="$IP_ADDRESSES $TEMP"
        done

        ## FILTERS ##
        j=1
        U32="tc filter add dev $DEV protocol ip parent 1:0 prio $PRIO_DEFAULT u32"
        for NET in $IP_ADDRESSES; do
            let CLASSID=($j+100)
            $U32_DEFAULT match ip src $NET/32 flowid 1:$CLASSID
            $U32_DEFAULT match ip dst $NET/32 flowid 1:$CLASSID
            let j=j+1
        done

    Can you guys help me figure out what's wrong with it? Basically I want my classes to be:

        1:1  (1.5 Mbit)
        1:10 (1024 kbit)
        1:20 (1024 kbit) (200 IPs, each with 32 kbit)
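
    A hedged reading of the script itself (pointing at what stands out, not a tested configuration): several shell-level slips would stop the filters from ever attaching, independent of the HTB design.

        # assignments must not start with '$'
        IPS_NET1=50
        IPS_NET2=100
        let IPS=$IPS_NET1+$IPS_NET2

        # typo in the subnet A loop: 'IP=ADDRESSES=' silently drops every address
        IP_ADDRESSES="$IP_ADDRESSES $TEMP"

        # the filter loop expands $U32_DEFAULT, which is never defined; the defined variable is $U32
        $U32 match ip src $NET/32 flowid 1:$CLASSID
        $U32 match ip dst $NET/32 flowid 1:$CLASSID

    Also note that class 1:10 uses $PRIO_WEB, which is not defined (PRIO_DEFAULT is), and that shaping on eth0 only controls upload; download shaping for the users is usually applied on eth1/eth2 (or via an ifb device).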

    Read the article

  • Virtualmin & git integration

    - by weby3456
    I've installed Virtualmin on my VPS to manage my websites. It has been working as expected for nearly a year now. Recently I wanted to add some features to one of my sites, and I need git integration. I've correctly installed git and gitweb on my server, and I can create repositories and browse them under http://sub.domain.com/git/gitweb.cgi. Here is the current relevant directory tree:

        /home/user/domains/sub.domain.com/public_html/git/
        drwxr-sr-x user   user .
        drwxr-x--- user   user ..
        -rw-r--r-- user   user git-favicon.png
        -rw-r--r-- user   user git-logo.png
        -rwxr-xr-x user   user gitweb.cgi
        -rw-r--r-- user   user gitweb.css
        drwxrwx--- apache user reponame.git

        /home/user/domains/sub.domain.com/public_html/git/reponame.git/
        drwxrwx--- apache user .
        drwxr-sr-x user   user ..
        drwxrwx--- apache user branches
        -rwxrwx--- apache user config
        -rwxrwx--- user   user description
        -rwxrwx--- apache user HEAD
        drwxrwx--- apache user hooks
        drwxrwx--- apache user info
        drwxrwx--- apache user objects
        drwxrwx--- apache user refs

    But I have some questions:

    1. When I visit http://sub.domain.com/git/gitweb.cgi, the owner is listed as 'Apache'. Why? How can I change that?

    2. Usually, to create a new git repository, I do something like:

        $ mkdir proj
        $ cd proj
        $ git init
        Initialized empty Git repository in /home/user/proj/.git/
        // here I create the files or copy them from somewhere else
        $ git add *.php
        $ git add README
        $ git commit -m 'initial version'

    But after creating the repository in Virtualmin, I find a new directory named 'reponame.git' and no '.git' directory. When I try to run any git command (e.g. git status) I receive "fatal: This operation must be run in a work tree". How can I work with that repository? (See the sketch below.)

    3. Currently I need to explicitly grant access for users to be able to view the repositories via gitweb. How can I make certain repositories public?
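
    A short sketch for the second question (paths are taken from the listing above): Virtualmin created a bare repository, which by design has no working tree, so you clone it somewhere else, work there, and push back.

        # clone the bare repo into a working directory
        git clone /home/user/domains/sub.domain.com/public_html/git/reponame.git ~/work/reponame
        cd ~/work/reponame

        # normal workflow
        git add index.php
        git commit -m 'initial version'
        git push origin master     # updates the bare repo that gitweb serves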

    Read the article

  • Using npm install as a MS-Windows system account

    - by Guss
    I have a node application running on Windows which I want to be able to update automatically. When I run npm install -d as the Administrator account it works fine, but when I try to run it through my automation software (which runs as Local System), I get errors when installing a private module from a private git repository:

        npm ERR! git clone [email protected]:team/repository.git fatal: Could not change back to 'C:/Windows/system32/config/systemprofile/AppData/Roaming/npm-cache/_git-remotes/git-bitbucket-org-team-repository-git-06356f5b': No such file or directory
        npm ERR! Error: Command failed: fatal: Could not change back to 'C:/Windows/system32/config/systemprofile/AppData/Roaming/npm-cache/_git-remotes/git-bitbucket-org-team-repository-git-06356f5b': No such file or directory
        npm ERR!
        npm ERR!     at ChildProcess.exithandler (child_process.js:637:15)
        npm ERR!     at ChildProcess.EventEmitter.emit (events.js:98:17)
        npm ERR!     at maybeClose (child_process.js:735:16)
        npm ERR!     at Socket.<anonymous> (child_process.js:948:11)
        npm ERR!     at Socket.EventEmitter.emit (events.js:95:17)
        npm ERR!     at Pipe.close (net.js:451:12)
        npm ERR! If you need help, you may report this log at:
        npm ERR!     <http://github.com/isaacs/npm/issues>
        npm ERR! or email it to:
        npm ERR!     <[email protected]>
        npm ERR! System Windows_NT 6.1.7601
        npm ERR! command "C:\\Program Files\\nodejs\\\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "install" "-d"
        npm ERR! cwd D:\nodeapp
        npm ERR! node -v v0.10.8
        npm ERR! npm -v 1.2.23
        npm ERR! code 128

    Just running git clone using the same SYSTEM account works fine. Any ideas?
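
    A hedged pointer (assuming the root cause really is the missing npm cache directory under the SYSTEM profile shown in the error): point the SYSTEM run at an npm cache path that exists and is writable, either via npm config or an environment variable, before running npm install -d. The D:\npm-cache path below is a placeholder.

        REM one-time setup, run as the account that executes the automation
        mkdir D:\npm-cache
        npm config set cache D:\npm-cache --global

        REM or per-invocation, inside the automation job itself
        set npm_config_cache=D:\npm-cache
        npm install -d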

    Read the article

  • Controlling clone access to multiple mercurial repos served via hgwebdir.cgi

    - by chrislawlor
    I'm trying to host multiple hg repositories to use for my clients. I need to control access to each repository individually - not just push access, but clone as well. I've got an .htaccess set which requires authentication globally: AuthUserFile /path/to/hgweb.passwd AuthGroupFile /dev/null AuthName "Chris Lawlor Client Mercurial Repositories" AuthType Basic <Limit GET POST PUT> Require valid-user </Limit> <FilesMatch "\.(htaccess|passwd|config|bak)$"> Order Allow,Deny Deny from all </FilesMatch> Then in each repository, I've got a .hg/hgrc file requiring a valid user [web] allow_push = <comma seperated user list> This almost does what I need. The problem is that I need to add ALL my clients to hgweb.passwd, which gives them clone access to ALL of the repositories. The only solution I can think of is to have another .htaccess and .passwd file in EACH repository. I don't really want to do that though, seems a little convoluted. I can already specify a list of authorized users for each repository in that repos' hgrc file with the allow_push setting. If only there were an allow_clone setting as well... All the documentation I've found for hgwebdir.cgi is incomplete. I've read: http://mercurial.selenic.com/wiki/HgWebDirStepByStep http://hgbook.red-bean.com/read/collaborating-with-other-people.html#sec:collab:cgi http://hgbook.red-bean.com/read/collaborating-with-other-people.html And others. I've yet to find a comprehensive list of hgrc settings. I guess this is as much an Apache question than a mercurial question. Unless I can find a better approach, I'll be going with a seperate .htaccess and .passwd file for each repo. This is a virtual host on Webfaction if it matters - set up roughly like this http://docs.webfaction.com/software/mercurial.html
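
    One hedged option, if you can edit the virtual host configuration rather than only .htaccess (a sketch; the /hg/... paths and user names are placeholders, not WebFaction-specific advice): keep a single password file and restrict each repository path to its own users with a <Location> block. That controls clone (read) access per repository, while allow_push in each hgrc keeps controlling push.

        # in the virtual host that serves hgwebdir.cgi
        <Location /hg/client-a-repo>
            AuthType Basic
            AuthName "Client A repository"
            AuthUserFile /path/to/hgweb.passwd
            Require user client-a
        </Location>

        <Location /hg/client-b-repo>
            AuthType Basic
            AuthName "Client B repository"
            AuthUserFile /path/to/hgweb.passwd
            Require user client-b another-dev
        </Location>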

    Read the article

  • [RESOLVED] Why does iptables NAT stop working when I enable the firewall's third interface?

    - by Kronick
    On my firewall I have three interfaces (plus an alias):

        eth0   : public IP (46.X.X.X)
        eth0:0 : public IP (46.X.X.Y)
        eth1   : public IP (88.X.X.X)
        eth2   : private LAN (172.X.X.X)

    I've set up a basic NAT which works great until I turn on the eth1 interface, at which point I basically lose connectivity. When I turn the interface off (ifconfig eth1 down), the NAT works again. I've added some policy routing via iproute, which makes my three public IPs available. I don't understand why turning eth1 on makes the LAN unavailable. PS: weirder still, when I turn on eth1 BUT remove the NAT, the firewall is reachable via the public IPs. So to me it's exclusively a NAT issue, since without the NAT the network works, while with the NAT and without the second public interface the NAT does work. Regards.

    EDIT: I've been able to make it work by using iproute2 rules. It was definitely a routing issue. Here is what I did:

        ip rule add prio 50 table main
        ip rule add prio 201 from ip1/netmask table 201
        ip rule add prio 202 from ip2/netmask table 202
        ip route add default via gateway1 dev interface1 src ip1 proto static table 201
        ip route append prohibit default table 201 metric 1 proto static
        ip route add default via gateway2 dev interface2 src ip2 proto static table 202
        ip route append prohibit default table 202 metric 1 proto static
        # multipath
        ip rule add prio 221 table 221
        ip route add default table 221 proto static \
            nexthop via gateway1 dev interface1 weight 2 \
            nexthop via gateway2 dev interface2 weight 3

    Read the article

  • Gitosis problems

    - by user49884
    I've spent the last 14 days on git and gitosis problems. I always found a way around my problems, but now I'm stuck. To briefly summarize the situation: I have set up gitosis and created a project, and I can check in and out of it. Then I added another user, giving him access to the project by adding him to gitosis.conf, but he cannot even clone the project. Then I added yet another user for the same project (following the same procedure), and he has access to everything (clone, pull and push). Finally, I added one more user, who cannot do anything either. I could live with all of this, because I have access to work on the project. Now I have added a new project - or have I? To my best belief, I have done everything the exact same way as with the first project. I do not get a repository in the repository folder on my server (when doing "git remote add ..." and push). I have tried following ALL the guides Google gave me on "how to create a new repository gitosis" (I was up to page 7 before not all hits were marked as visited). I have also tried a different path, starting with "git init --bare" on the server and then trying to clone it. That didn't work either. I get the following error no matter what I try:

        ERROR: gitosis.serve.main: Repository read access denied
        fatal: The remote end hung up unexpectedly

    (But it works fine for accessing gitosis-admin and my first project.) Then I read about debugging gitosis. I have tried with -v, --verbose and adding LogLevel = DEBUG in gitosis.conf; none of these give me extra information. Project setup in gitosis.conf:

        [group project]
        writable = project
        members = me

        LogLevel = DEBUG

    To my best belief, everything is done the exact same way as when I set up my first project. I'm really stuck; how do I proceed now?
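
    A hedged checklist for the new-project case (a sketch of the standard gitosis flow; the group, repo and host names are placeholders): the repository name you push to must match the writable entry exactly, and the updated gitosis.conf must itself be committed and pushed before the first push of the new project, which is what creates the bare repository on the server.

        # 1. in your gitosis-admin clone
        cat >> gitosis.conf <<'EOF'
        [group newproject]
        writable = newproject
        members = me
        EOF
        git commit -am "add newproject" && git push

        # 2. in the new project's working copy
        git remote add origin git@yourserver:newproject.git
        git push origin master        # gitosis creates repositories/newproject.git now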

    Read the article

  • Is there any equivalent of `--depth immediates` in `git`?

    - by ???
    Currently, I'm trying to set up a git front-end to a Subversion repository. My Subversion repository is a single large repository consisting of several co-related projects:

        svn-root
        |-- project1
        |   |-- branches
        |   |-- tags
        |   `-- trunk
        |-- project2
        |   |-- branches
        |   |-- tags
        |   `-- trunk
        `-- project3
            |-- branches
            |-- tags
            `-- trunk

    Because files sometimes need to move between different projects, I don't want to break the repository into separate ones. I'm going to use git-svn to set up a git front-end, but I don't see how to map the svn structure onto a git structure exactly. The two systems treat branches and tags very differently, and I doubt it is possible. To simplify the problem, I would just git svn clone the whole root directory and let the branches/tags/trunk directories sit there. But this will definitely result in too many files in the branches and tags directories. In Subversion it's easy to set the depth of a checkout to immediates, which checks out only the branch/tag names without the directory contents, but I don't know if this can be done in git. git-svn has me confused; I hope there's a more elegant solution.
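
    A hedged alternative to cloning the whole root (a sketch of git-svn's project mapping; the project names come from the tree above, the URL is a placeholder): list each project's trunk/branches/tags explicitly in the svn-remote config so branches and tags become git refs instead of checked-out directories. git has no direct equivalent of --depth immediates, but refs give a similar "names only" view.

        # .git/config, after git svn init
        [svn-remote "svn"]
            url = https://svn.example.com/svn-root
            fetch = project1/trunk:refs/remotes/project1/trunk
            branches = project1/branches/*:refs/remotes/project1/branches/*
            tags = project1/tags/*:refs/remotes/project1/tags/*
            fetch = project2/trunk:refs/remotes/project2/trunk
            branches = project2/branches/*:refs/remotes/project2/branches/*
            tags = project2/tags/*:refs/remotes/project2/tags/*

        # then
        git svn fetch
        git branch -r    # branch/tag names appear as remote refs; contents only arrive on checkout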

    Read the article

  • OCR anything with OneNote 2007 and 2010

    - by Matthew Guay
    Quality OCR software can often be very expensive, but you may have one already installed on your computer that you didn’t know about.  Here’s how you can use OneNote to OCR anything on your computer. OneNote is one of the overlooked gems in recent versions of Microsoft Office.  OneNote makes it simple to take notes and keep track of everything with integrated search, and offers more features than its popular competitor Evernote.  One way it is better is its high quality optical character recognition (OCR) engine.  One of Evernote’s most popular features is that you can search for anything, including text in an image, and you can easily find it.  OneNote takes this further, and instantly OCRs any text in images you add.  Then, you can use this text easily and copy it from the image.  Let’s see how this works and how you can use OneNote as the ultimate OCR. Please Note: This feature is available in OneNote 2007 and 2010.  OneNote 2007 is included with Office 2007 Home and Student, Enterprise, and Ultimate, while OneNote 2010 is included with all edition of Office 2010 except for Starter edition. OCR anything First, let’s add something to OCR into OneNote.  There are many different ways you can add items to OCR into OneNote.  Open a blank page or one you want to insert something into, and then follow these steps to add what you want into OneNote. Picture Simply drag-and-drop a picture with text into a notebook… You can insert a picture directly from OneNote as well.  In OneNote 2010, select the Insert tab, and then choose Picture. In OneNote 2007, select the Insert menu, select Picture, and then choose From File.   Screen Clipping There are many times we’d like to copy text from something we see onscreen, but there is no direct way to copy text from that thing.  For instance, you cannot copy text from the title-bar of a window, or from a flash-based online presentation.  For these cases, the Screen Clipping option is very useful.  To add a clip of anything onscreen in OneNote 2010, select the Insert tab in the ribbon and click Screen Clipping. In OneNote 2007, either click the Clip button on the toolbar or select the Insert menu and choose Screen Clipping.   Alternately, you can take a screen clipping by pressing the windows key + S. When you click Screen Clipping, OneNote will minimize, your desktop will fade lighter, and your mouse pointer will change to a plus sign.  Now, click and drag over anything you want to add to OneNote.  Here we’re selecting the title of this article. The section you selected will now show up in your OneNote notebook, complete with the date and time the clip was made. Insert a file You’re not limited to pictures; OneNote can even OCR anything in most files on your computer.  You can add files directly in OneNote 2010 by selecting File Printout in the Insert tab. In OneNote 2007, select the Insert menu and choose Files as Printout. Choose the file you want to add to OneNote in the dialog. Select Insert, and OneNote will pause momentarily as it processes the file. Now your file will show up in OneNote as a printout with a link to the original file above it. You can also send any file directly to OneNote via the OneNote virtual printer.  If you have a file open, such as a PDF, that you’d like to OCR, simply open the print dialog in that program and select the “Send to OneNote” printer. Or, if you have a scanner, you can scan documents directly into OneNote by clicking Scanner Printout in the Insert tab in OneNote 2010. 
In OneNote 2007, to add a scanned document select the Insert menu, select Picture, and then choose From Scanner or Camera. OCR the image, file, or screenshot you put in OneNote Now that you’ve got your stuff into OneNote, let’s put it to work.  OneNote automatically did an OCR scan on anything you inserted into OneNote.  You can check to make sure by right-clicking on any picture, screenshot, or file you inserted.  Select “Make Text in Image Searchable” and then make sure the correct language is selected. Now, you can copy text from the Picture.  Simply right-click on the picture, and select “Copy Text from Picture”. And here’s the text that OneNote found in this picture: OCR anything with OneNote 2007 and 2010 - Windows Live Writer Not bad, huh?  Now you can paste the text from the picture into a document or anywhere you need to use the text. If you are instead copying text from a printout, it may give you the option to copy text from this page or all pages of the printout.   This works the exact same in OneNote 2007. In OneNote 2010, you can also edit the text OneNote has saved in the image from the OCR.  This way, if OneNote read something incorrectly you can change it so you can still find it when you use search in OneNote.  Additionally, you can copy only a specific portion of the text from the edit box, so it can be useful just for general copying as well.  To do this, right-click on the item and select “Edit Alt Text”. Here is the window to edit alternate text.  If you want to copy only a portion of the text, simply select it and press Ctrl+C to copy that portion. Searching OneNote’s OCR engine is very useful for finding specific pictures you have saved in OneNote.  Simply enter your search query in the search box on top right, and OneNote will automatically find all instances of that term in all of your notebooks.  Notice how it highlights the search term even in the image! This works the same in OneNote 2007.  Notice how it highlighted “How-to” in a shot of the header image in our favorite website. In Windows Vista and 7, you can even search for things OneNote OCRed from the Start Menu search.  Here the start menu search found the words “Windows Live Writer” in our OCR Test notebook in OneNote where we inserted the screen clip above. Conclusion OneNote is a very useful OCR tool, and can help you capture text from just about anything.  Plus, since you can easily search everything you have stored in OneNote, you can quickly find anything you insert anytime.  OneNote is one of the least-used Office tools, but we have found it very useful and hope you do too.

    Read the article

  • Blend for Visual Studio 2013 Prototyping Applications with SketchFlow

    - by T
    Originally posted on: http://geekswithblogs.net/tburger/archive/2014/08/10/blend-for-visual-studio-2013-prototyping-applications-with-sketchflow.aspxSketchFlow enables rapid creating of dynamic interface mockups very quickly. The SketchFlow workspace is the same as the standard Blend workspace with the inclusion of three panels: the SketchFlow Feedback panel, the SketchFlow Animation panel and the SketchFlow Map panel. By using SketchFlow to prototype, you can get feedback early in the process. It helps to surface possible issues, lower development iterations, and increase stakeholder buy in. SketchFlow prototypes not only provide an initial look but also provide a way to add additional ideas and input and make sure the team is on track prior to investing in complete development. When you have completed the prototyping, you can discard the prototype and just use the lessons learned to design the application from or extract individual elements from your prototype and include them in the application. I don’t recommend trying to transition the entire project into a development project. Objects that you add with the SketchFlow style have a hand-sketched look. The sketch style is used to remind stakeholders that this is a prototype. This encourages them to focus on the flow and functionality without getting distracted by design details. The sketchflow assets are under sketchflow in the asset panel and are identifiable by the postfix “–Sketch”. For example “Button-Sketch”. You can mix sketch and standard controls in your interface, if required. Be creative, if there is a missing control or your interface has a different look and feel than the out of the box one, reuse other sketch controls to mimic the functionality or look and feel. Only use standard controls if it doesn’t distract from the idea that this is a prototype and not a standard application. The SketchFlow Map panel provides information about the structure of your application. To create a new screen in your prototype: Right-click the map surface and choose “Create a Connected Screen”. Name the screens with names that are meaningful to the stakeholders. The start screen is the one that has the green arrow. To change the start screen, right click on any other screen and set to start screen. Only one screen can be the start screen at a time. Rounded screen are component screens to mimic reusable custom controls that will be built into the final application. You can change the colors of all of the boxes and should use colors to create functional groupings. The groupings can be identified in the SketchFlow Project Settings. To add connections between screens in the SketchFlow Map panel. Move the mouse over a screen in the SketchFlow and a menu will appear at the bottom of the screen node. In the menu, click Connect to an existing screen. Drag the arrow to another screen on the Map. You add navigation to your prototype by adding connections on the SketchFlow map or by adding navigation directly to items on your interface. To add navigation from objects on the artboard, right click the item then from the menu, choose “Navigate to”. This will expose a sub-menu with available screens, backward, or forward. When the map has connected screens, the SketchFlow Player displays the connected screens on the Navigate sidebar. All screens show in the SketchFlow Player Map. To see the SketchFlow Player, run your SketchFlow prototype. The Navigation sidebar is meant to show the desired user work flow. 
The map can be used to view the different screens regardless of suggested navigation in the navigation bar. The map is able to be hidden and shown. As mentioned, a component screen is a shared screen that is used in more than one screen and generally represents what will be a custom object in the application. To create a component screen, you can create a screen, right click on it in the SketchFlow Map and choose “Make into component screen”. You can mouse over a screen and from the menu that appears underneath, choose create and insert component screen. To use an existing screen, select if from the Asset panel under SketchFlow, Components. You can use Storyboards and Visual State animations in your SketchFlow project. However, SketchFlow also offers its own animation technique that is simpler and better suited for prototyping. The SketchFlow Animation panel is above your artboard by default. In SketchFlow animation, you create frames and then position the elements on your interface for each frame. You then specify elapsed time and any effects you want to apply to the transition. The + at the top is what creates new frames. Once you have a new Frame, select it and change the property you want to animate. In the example above, I changed the Text of the result box. You can adjust the time between frames in the lower area between the frames. The easing and effects functions are changed in the center between each frame. You edit the hold time for frames by clicking the clock icon in the lower left and the hold time will appear on each frame and can be edited. The FluidLayout icon (also located in the lower left) will create smooth transitions. Next to the FluidLayout icon is the name of that Animation. You can rename the animation by clicking on it and editing the name. The down arrow chevrons next to the name allow you to view the list of all animations in this prototype and select them for editing. To add the animation to the interface object (such as a button to start the animation), select the PlaySketchFlowAnimationAction from the SketchFlow behaviors in the Assets menu and drag it to an object on your interface. With the PlaySketchFlowAnimationAction that you just added selected in the Objects and Timeline, edit the properties to change the EventName to the event you want and choose the SketchFlowAnimation you want from the drop down list. You may want to add additional information to your screens that isn’t really part of the prototype but is relevant information or a request for clarification or feedback from the reviewer. You do this with annotations or notes. Both appear on the user interface, however, annotations can be switched on or off at design and review time. Notes cannot be switched off. To add an Annotation, chose the Create Annotation from the Tools menu. The annotation appears on the UI where you will add the notes. To display or Hide annotations, click the annotation toggle at the bottom right on the artboard . After to toggle annotations on, the identifier of the person who created them appears on the artboard and you must click that to expand the notes. To add a note to the artboard, simply select the Note-Sketch from Assets ->SketchFlow ->Styles ->Sketch Styles. Drag and drop it to the artboard and place where you want it. When you are ready for users to review the prototype, you have a few options available. Click File -> Export and choose one of the options from the list: Publish to Sharepoint, Package SketchFlowProject, Export to Microsoft Word, or Export as Images. 
I suggest you play with as many of the options as you can to see what they do. Both the Sharepoint and Packaged SketchFlowProject allow you to collect feedback from one or more users that you can import into the project. The user can make notes on the UI and in the Feedback area in the bottom left corner of the player. When the user is done adding feedback, it is exported from the right most folder icon in the My Feedback panel. Feeback is imported on a panel named SketchFlow Feedback. To get that panel to show up, select Window -> SketchFlow Feedback. Once you have the panel showing, click the + in the upper right of the panel and find the notes you exported. When imported, they will show up in a list and on the artboard. To document your prototype, use the Export to Microsoft Word option from the File menu. That should get you started with prototyping.

    Read the article

  • WP7 Tips–Part I– Media File Coding Techniques to help pass the Windows Phone 7 Marketplace Certification Requirements

    - by seaniannuzzi
    Overview Developing an application that plays media files on a Windows Phone 7 Device seems fairly straight forward.  However, what can make this a bit frustrating are the necessary requirements in order to pass the WP7 marketplace requirements so that your application can be published.  If you are new to this development, be aware of these common challenges that are likely to be made.  Below are some techniques and recommendations on how optimize your application to handle playing MP3 and/or WMA files that needs to adhere to the marketplace requirements.   Windows Phone 7 Certification Requirements Windows Phone 7 Developers Blog   Some common challenges are: Not prompting the user if another media file is playing in the background before playing your media file Not allowing the user to control the volume Not allowing the user to mute the sound Not allowing the media to be interrupted by a phone call  To keep this as simple as possible I am only going to focus on what “not to do” and what “to do” in order to implement a simple media solution. Things you will need or may be useful to you before you begin: Visual Studio 2010 Visual Studio 2010 Feature Packs Windows Phone 7 Developer Tools Visual Studio 2010 Express for Windows Phone Windows Phone Emulator Resources Silverlight 4 Tools For Visual Studio XNA Game Studio 4.0 Microsoft Expression Blend for Windows Phone Note: Please keep in mind you do not need all of these downloaded and installed, it is just easier to have all that you need now rather than add them on later.   Objective Summary Create a Windows Phone 7 – Windows Media Sample Application.  The application will implement many of the required features in order to pass the WP7 marketplace certification requirements in order to publish an application to WP7’s marketplace. (Disclaimer: I am not trying to indicate that this application will always pass as the requirements may change or be updated)   Step 1: – Create a New Windows Phone 7 Project   Step 2: – Update the Title and Application Name of your WP7 Application For this example I changed: the Title to: “DOTNETNUZZI WP7 MEDIA SAMPLE - v1.00” and the Page Title to:  “media magic”. Note: I also updated the background.   Step 3: – XAML - Media Element Preparation and Best Practice Before we begin the next step I just wanted to point out a few things that you should not do as a best practice when developing an application for WP7 that is playing music.  Please keep in mind that these requirements are not the same if you are playing Sound Effects and are geared towards playing media in the background.   If you have coded this – be prepared to change it:   To avoid a failure from the market place remove all of your media source elements from your XAML or simply create them dynamically.  To keep this simple we will remove the source and set the AutoPlay property to false to ensure that there are no media elements are active when the application is started. 
Proper example of the media element with No Source:   Some Additional Settings - Add XAML Support for a Mute Button   Step 4: – Boolean to handle toggle of Mute Feature Step 5: – Add Event Handler for Main Page Load   Step 6: – Add Reference to the XNA Framework   Step 7: – Add two Using Statements to Resolve the Namespace of Media and the Application Bar using Microsoft.Xna.Framework.Media; using Microsoft.Phone.Shell;   Step 8: – Add the Method to Check the Media State as Shown Below   Step 9: – Add Code to Mute the Media File Step 10: – Add Code to Play the Media File //if the state of the media has been checked you are good to go. media_sample.Play(); Note: If we tried to perform this operation at this point you will receive the following error: System.InvalidOperationException was unhandled Message=FrameworkDispatcher.Update has not been called. Regular FrameworkDispatcher.Update calls are necessary for fire and forget sound effects and framework events to function correctly. See http://go.microsoft.com/fwlink/?LinkId=193853 for details. StackTrace:        at Microsoft.Xna.Framework.FrameworkDispatcher.AddNewPendingCall(ManagedCallType callType, UInt32 arg)        at Microsoft.Xna.Framework.UserAsyncDispatcher.HandleManagedCallback(ManagedCallType managedCallType, UInt32 managedCallArgs) at Microsoft.Xna.Framework.UserAsyncDispatcher.AsyncDispatcherThreadFunction()            It is not recommended that you just add the FrameworkDispatcher.Update(); call before playing the media file. It is recommended that you implement the following class to your solution and implement this class in the app.xaml.cs file.   Step 11: – Add FrameworkDispatcher Features I recommend creating a class named XNAAsyncDispatcher and adding the following code:   After you have added the code accordingly, you can now implement this into your app.xaml.cs file as highlighted below.   Note:  If you application sound file is not playing make sure you have the proper “Build Action” set such as Content.   Running the Sample Now that we have some of the foundation created you should be able to run the application successfully.  When the application launches your sound options should be set accordingly when the “checkMediaState” method is called.  As a result the application will properly setup the media options and/or alert the user accordinglyper the certification requirements.  In addition, the sample also shows a quick way to mute the sound in your application by simply removing the URI source of the media file.  If everything successfully compiled the application should look similar to below.                 <sound playing>   Summary At this point we have a fully functional application that provides techniques on how to avoid some common challenges when working with media files and developing applications for Windows Phone 7.  The techniques mentioned above should make things a little easier and helpful in getting your WP7 application approved and published on the Marketplace.  The next blog post will be titled: WP7 Tips–Part II - How to write code that will pass the Windows Phone 7 Marketplace Requirements for Themes (light and dark). If anyone has any questions or comments please comment on this blog. 
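
    A hedged sketch of what the Step 8 checkMediaState method might look like (an assumption - the article's own code was in a screenshot that did not survive): check whether background music already has control of the media player and ask the user before interrupting it, as the certification requirements describe.

        // at the top of the page's code-behind (C#):
        // using System.Windows;
        // using Microsoft.Xna.Framework;
        // using Microsoft.Xna.Framework.Media;

        private void checkMediaState()
        {
            // required before XNA media calls from a Silverlight page
            FrameworkDispatcher.Update();

            if (!MediaPlayer.GameHasControl)
            {
                // certification requires asking before interrupting the user's background music
                var result = MessageBox.Show(
                    "Music is already playing. Stop it and play this app's audio?",
                    "Audio", MessageBoxButton.OKCancel);
                if (result == MessageBoxResult.OK)
                {
                    MediaPlayer.Stop();
                }
            }
        }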

    Read the article

  • Overview of SOA Diagnostics in 11.1.1.6

    - by ShawnBailey
    What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g but it can be confusing as to which tool is appropriate for a particular situation and what their relationships are. This blog post will introduce the various tools and attempt to clarify what each is for and how they are related. Let's first list the tools we'll be addressing: RDA: Remote Diagnostic Agent DFW: Diagnostic Framework Selective Tracing DMS: Dynamic Monitoring Service ODL: Oracle Diagnostic Logging ADR: Automatic Diagnostics Repository ADRCI: Automatic Diagnostics Repository Command Interpreter WLDF: WebLogic Diagnostic Framework This overview is not mean to be a comprehensive guide on using all of these tools, however, extensive reference materials are included that will provide many more details on their execution. Another point to note is that all of these tools are applicable for Fusion Middleware as a whole but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool or 'WLST' interface. WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post but you can find general information here. There are more specific resources in the below sections. In this post when we refer to 'Enterprise Manager' or 'EM' we are referring to Enterprise Manager Fusion Middleware Control. RDA (Remote Diagnostic Agent) RDA is a standalone tool that is used to collect both static configuration and dynamic runtime information from the SOA environment. RDA is generally run manually from the command line against a domain or single server. When opening a new Service Request, including an RDA collection can dramatically decrease the back and forth required to collect logs and configuration information for Support. After installing RDA you configure it to use the SOA Suite module as decribed in the referenced resources. The SOA module includes the Oracle WebLogic Server (WLS) module by default in order to include all of the relevant information for the environment. In addition to this basic configuration there is also an advanced mode where you can set the number of thread dumps for the collections, log files, Incidents, etc. When would you use it? When creating a Service Request or otherwise working with Oracle resources on an issue, capturing environment snapshots to baseline your configuration or to diagnose an issue on your own. How is it related to the other tools? RDA is related to DFW in that it collects the last 10 Incidents from the server by default. In a similar manner, RDA is related to ODL through its collection of the diagnostic logs and these may contain information from Selective Tracing sessions. 
Examples of what it currently collects (for details please see the links in the Resources section):

Diagnostic Logs (ODL)
Diagnostic Framework Incidents (DFW)
SOA MDS Deployment Descriptors
SOA Repository Summary Statistics
Thread Dumps
Complete Domain Configuration

RDA Resources:

Webcast Recording: Using RDA with Oracle SOA Suite 11g
Blog Post: Diagnose SOA Suite 11g Issues Using RDA
Download RDA
How to Collect Analysis Information Using RDA for Oracle SOA Suite 11g Products [ID 1350313.1]
How to Collect Analysis Information Using RDA for Oracle SOA Suite and BPEL Process Manager 11g [ID 1352181.1]
Getting Started With Remote Diagnostic Agent: Case Study - Oracle WebLogic Server (Video) [ID 1262157.1]

DFW (Diagnostic Framework)

DFW provides the ability to collect specific information for a particular problem when that problem occurs. DFW is included with your SOA Suite installation and deployed to the domain. Let's define the components of DFW:

Diagnostic Dumps: Specific diagnostic collections that are defined at either the 'system' or product level. Examples would be diagnostic logs or thread dumps.
Incident: A collection of Diagnostic Dumps associated with a particular problem.
Log Conditions: An Oracle Diagnostic Logging event that DFW is configured to listen for. If the event is identified then an Incident will be created.
WLDF Watch: The WebLogic Diagnostic Framework or 'WLDF' is not a component of DFW; however, it can be a source of DFW Incident creation through the use of a 'Watch'.
WLDF Notification: A Notification is a component of WLDF and is the link between the Watch and DFW. You can configure multiple Notification types in WLDF and associate them with your Watches. 'FMWDFW-notification' is available to you out of the box to allow for DFW notification of Watch execution.
Rule: Defines a WLDF Watch or Log Condition for which we want to associate a set of Diagnostic Dumps. When triggered, the specified dumps will be collected and added to the Incident.
Rule Action: Defines the specific Diagnostic Dumps to collect for a particular rule.
ADR: Automatic Diagnostics Repository; defined for every server in a domain. This is where Incidents are stored.

Now let's walk through a simple flow:

1. Oracle Web Services error message OWS-04086 (SOAP Fault) is generated on managed server 1
2. The DFW Log Condition for OWS-04086 evaluates to TRUE
3. DFW creates a new Incident in the ADR for managed server 1
4. DFW executes the specified Diagnostic Dumps and adds the output to the Incident

In this case we'll grab the diagnostic log and thread dump. We might also want to collect the WSDL binding information and SOA audit trail.

When would you use it? When you want to automatically collect Diagnostic Dumps at a particular time using a trigger, or when you want to manually collect the information. In either case it can be readily uploaded to Oracle Support through the Service Request.

How is it related to the other tools? DFW generates Incidents, which are collections of Diagnostic Dumps. One of the system level Diagnostic Dumps collects the current server diagnostic log, which is generated by ODL and can contain information from Selective Tracing sessions. Incidents are included in RDA collections by default, and ADRCI is a tool that is used to package an Incident for upload to Oracle Support. In addition, both ODL and DMS can be used to trigger Incident creation through DFW. The conditions and rules for generating Incidents can become quite complicated and the resources below go into more detail.

A simpler approach to leveraging at least the Diagnostic Dumps is through WLST (WebLogic Scripting Tool), where there are commands to create an Incident, execute a single Diagnostic Dump, describe a Diagnostic Dump, and list the available Diagnostic Dumps, as in the sketch below.
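To give a feel for how those WLST commands are used, here is a rough sketch of a session. It is illustrative only: the connection details and output path are placeholders, the dump name soa.config comes from the list later in this post, and the exact argument names should be confirmed against the DFW WLST Command Reference in the resources.

# WLST session against the SOA server (run WLST from the Fusion Middleware ORACLE_HOME
# so the Diagnostic Framework commands are registered)
connect('weblogic', '<password>', 't3://soahost:7001')

listDumps()                                    # list the Diagnostic Dumps available on this server
describeDump(name='soa.config')                # show what the soa.config dump collects (11.1.1.6+)
executeDump(name='soa.config', outputFile='/tmp/soa_config_dump.txt')   # run a single dump

# Manually create an Incident; DFW collects the dumps defined for the matching rule into ADR
createIncident(messageId='OWS-04086', description='SOAP fault investigation')

disconnect()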
The WLST option offers greater control in what is generated and when. It can be a great help when collecting information for Support. There is overlap with RDA; however, DFW is geared towards collecting specific runtime information when an issue occurs, while existing Incidents are collected by RDA.

There are 3 WLDF Watches configured by default in a SOA Suite 11g domain: Stuck Threads, Unchecked Exception and Deadlock. These Watches are enabled by default and will generate Incidents in ADR. They are configured to reset automatically after 30 seconds, so they have the potential to create multiple Incidents if these conditions persist. The Incidents generated by these Watches will only contain system level Diagnostic Dumps. These same system level Diagnostic Dumps will be included in any application-scoped Incident as well.

Starting in 11.1.1.6, SOA Suite includes its own set of application-scoped Diagnostic Dumps that can be executed from WLST or through a WLDF Watch or Log Condition. These Diagnostic Dumps can be added to an Incident, such as in the earlier example using the error code OWS-04086.

soa.config: MDS configuration files and deployed-composites.xml
soa.composite: All artifacts related to the deployed composite
soa.wsdl: Summary of endpoints configured for the composite
soa.edn: EDN configuration summary if applicable
soa.db: Summary DB information for the SOA repository
soa.env: Coherence cluster configuration summary
soa.composite.trail: Partial audit trail information for the running composite

The current release of RDA has the option to collect the soa.wsdl and soa.composite Diagnostic Dumps. More Diagnostic Dumps for SOA Suite products are planned for future releases, along with enhancements to DFW itself.

DFW Resources:

Webcast Recording: SOA Diagnostics Sessions: Diagnostic Framework
Diagnostic Framework Documentation
DFW WLST Command Reference
Documentation for SOA Diagnostic Dumps in 11.1.1.6

Selective Tracing

Selective Tracing is a facility available starting in version 11.1.1.4 that allows you to increase the logging level for specific loggers and for a specific context. What this means is that you have greater capability to collect needed diagnostic log information in a production environment with reduced overhead. For example, a Selective Tracing session can be executed that only increases the log level for one composite, only one logger, limited to one server in the cluster, and for a preset period of time. In an environment where dozens of composites are deployed this can dramatically reduce the volume and overhead of the logging without sacrificing relevance.

Selective Tracing can be administered either from Enterprise Manager or through WLST. WLST provides a bit more flexibility in terms of exactly where the tracing is run.

When would you use it? When there is an issue in production or another environment that lends itself to filtering by an available context criterion, and increasing the log level globally would result in too much overhead or irrelevant information. The information is written to the server diagnostic log and is exportable from Enterprise Manager.

How is it related to the other tools? Selective Tracing output is written to the server diagnostic log.
This log can be collected by a system level Diagnostic Dump using DFW or through a default RDA collection. Selective Tracing also heavily leverages ODL fields to determine what to trace and to tag information that is part of a particular tracing session.

Available Context Criteria:

Application Name
Client Address
Client Host
Composite Name
User Name
Web Service Name
Web Service Port

Selective Tracing Resources:

Webcast Recording: SOA Diagnostics Session: Using Selective Tracing to Diagnose SOA Suite Issues
How to Use Selective Tracing for SOA [ID 1367174.1]
Selective Tracing WLST Reference

DMS (Dynamic Monitoring Service)

DMS exposes runtime information for monitoring. This information can be monitored in two ways:

Through the DMS servlet
As exposed MBeans

The servlet is deployed by default and can be accessed through http://<host>:<port>/dms/Spy (use administrative credentials to access). The landing page of the servlet shows identical columns of what are known as Noun Types. If you select a Noun Type you will see a table in the right frame that shows the attributes (Sensors) for the Noun Type and the available instances. SOA Suite has several exposed Noun Types that are available for viewing through the Spy servlet. Screenshots of the Spy servlet are available in the Knowledge Base article How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS).

Every Noun instance in the runtime is exposed as an MBean instance. As such, they are generally available through an MBean browser and available for monitoring through WLDF. You can configure a WLDF Watch to monitor a particular attribute and fire a notification when the threshold is exceeded. A WLDF Watch can use the out of the box DFW notification type to notify DFW to create an Incident.

When would you use it? When you want to monitor a metric or set of metrics either manually or through an automated system, or when you want to trigger a WLDF Watch based on a metric exposed through DMS.

How is it related to the other tools? DMS metrics can be monitored with WLDF Watches, which can in turn notify DFW to create an Incident.

DMS Resources:

How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1]
How to Reset a SOA 11g DMS Metric
DMS Documentation
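The metrics shown in the Spy servlet can also be pulled from the command line with the DMS custom WLST commands. The following is a minimal sketch, assuming WLST is started from the Fusion Middleware ORACLE_HOME (so the DMS commands are available) and using placeholder connection details; the 'JVM' table name is just an example, use displayMetricTableNames() to see what your environment actually exposes.

# WLST (ORACLE_HOME/common/bin/wlst.sh) with placeholder credentials
connect('weblogic', '<password>', 't3://soahost:7001')

displayMetricTableNames()     # list the DMS metric tables (Noun Types) exposed by the server
displayMetricTables('JVM')    # print one table; 'JVM' is an example name only
dumpMetrics()                 # dump all metrics for the connected server

disconnect()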
ODL (Oracle Diagnostic Logging)

ODL is the primary facility for most Fusion Middleware applications to log what they are doing. Whenever you change a logging level through Enterprise Manager, it is ultimately exposed through ODL and written to the server diagnostic log. A notable exception to this is WebLogic Server, which uses its own log format and file. ODL logs entries in a consistent, structured way using predefined fields and name/value pairs. Here's an example of a SOA Suite entry:

[2012-04-25T12:49:28.083-06:00] [AdminServer] [ERROR] [] [oracle.soa.bpel.engine] [tid: [ACTIVE].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: ] [ecid: 0963fdde7e77631c:-31a6431d:136eaa46cda:-8000-00000000000000b4,0] [errid: 41] [WEBSERVICE_PORT.name: BPELProcess2_pt] [APP: soa-infra] [composite_name: TestProject2] [J2EE_MODULE.name: fabric] [WEBSERVICE.name: bpelprocess1_client_ep] [J2EE_APP.name: soa-infra] Error occured while handling a post operation[[

When would you use it? You'll use ODL almost every time you want to identify and diagnose a problem in the environment. The entries are written to the server diagnostic log.

How is it related to the other tools? The server diagnostic logs are collected by DFW and RDA. Selective Tracing writes its information to the diagnostic log as well. Additionally, DFW Log Conditions are triggered by ODL log events.

ODL Resources:

ODL Documentation

ADR (Automatic Diagnostics Repository)

ADR is not a tool in and of itself but is where DFW stores the Incidents it creates. Every server in the domain has an ADR location, which can be found under <SERVER_HOME>/adr. This is referred to as the ADR 'Base' location. ADR also has what are known as 'Home' locations.

Example: You have a domain called 'myDomain' and an associated managed server called 'myServer'. Your admin server is called 'AdminServer'. Your domain home directory is called 'myDomain' and it contains a 'servers' directory. The 'servers' directory contains a directory for the managed server called 'myServer', and here is where you'll find the 'adr' directory, which is the ADR 'Base' location for myServer. To get to the ADR 'Home' locations we drill down a few levels: diag/ofm/myDomain/

In an 11.1.1.6 SOA Suite domain you will see 2 directories here, 'myServer' and 'soa-infra'. These are the ADR 'Home' locations. 'myServer' is the 'system' ADR home and contains system level Incidents. 'soa-infra' is the name that SOA Suite used to register with DFW, and this ADR home contains SOA Suite related Incidents. Each ADR home location contains a series of directories, one of which is called 'incident'. This is where your Incidents are stored.

When would you use it? It's a good idea to check on these locations from time to time to see whether a lot of Incidents are being generated. They can be cleaned out by deleting the Incident directories or through the ADRCI tool. If you know that an Incident is of particular interest for an issue you're working with Oracle, you can simply zip it up and provide it.

How does it relate to the other tools? ADR is obviously very important for DFW since it's where the Incidents are stored. Incidents contain Diagnostic Dumps that may relate to diagnostic logs (ODL) and DMS metrics. The most recent 10 Incident directories are collected by RDA by default, and ADRCI relies on the ADR locations to help manage the contents.

ADRCI (Automatic Diagnostics Repository Command Interpreter)

ADRCI is a command line tool for packaging and managing Incidents.

When would you use it? When purging Incidents from an ADR Home location or when you want to package an Incident along with an offline RDA collection for upload to Oracle Support.

How does it relate to the other tools? ADRCI contains a tool called the Incident Packaging System, or IPS. This is used to package an Incident for upload to Oracle Support through a Service Request. Starting in 11.1.1.6, IPS will attempt to collect an offline RDA collection and include it with the Incident package. This will only work if Perl is available on the path; otherwise it will give a warning and package only the Incident files.

ADRCI Resources:

How to Use the Incident Packaging System (IPS) in SOA 11g [ID 1381259.1]
ADRCI Documentation
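As an illustration of the typical packaging steps, an ADRCI session might look like the following; the base path, home path, and incident id are placeholders built from the example layout above, so substitute your own values and see [ID 1381259.1] for the authoritative steps.

adrci> set base /u01/domains/myDomain/servers/myServer/adr
adrci> show homes
adrci> set homepath diag/ofm/myDomain/soa-infra
adrci> show incident
adrci> ips pack incident 41 in /tmp
adrci> purge -age 43200

Here, show incident lists the Incident ids and problem keys in the selected home, ips pack writes a zip suitable for attaching to a Service Request, and purge -age removes content older than the given number of minutes (43200 minutes is 30 days).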
WLDF (WebLogic Diagnostic Framework)

WLDF is functionality that has been available in WebLogic Server since version 9. Starting with FMW 11g, a link has been added between WLDF and the pre-existing DFW: the WLDF Watch Notification. Let's take a closer look at the flow:

1. There is a need to monitor the performance of your SOA Suite message processing.
2. A WLDF Watch is created in the WLS console that will trigger if the average message processing time exceeds 2 seconds. This metric is monitored through a DMS MBean instance.
3. The out of the box DFW Notification (the Notification is called FMWDFW-notification) is added to the Watch. Under the covers this notification is of type JMX.
4. The Watch is triggered when the threshold is exceeded and fires the Notification.
5. DFW has a listener that picks up the Notification and evaluates it according to its rules, etc.

When it comes to automatic Incident creation, WLDF is a key component with capabilities that will grow over time.

When would you use it? When you want to monitor the WLS server log or an MBean metric for some condition and fire a notification when the Watch is triggered.

How does it relate to the other tools? WLDF is used to automatically trigger Incident creation through DFW using the DFW Notification.

WLDF Resources:

How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1]
How To Script the Creation of a SOA WLDF Watch in 11g [ID 1377986.1]
WLDF Documentation
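Watches like the one described in the flow above can also be scripted rather than created in the WLS console. The sketch below is only a rough outline: it assumes the default WLDF module in a SOA domain is named 'Module-FMWDFW', uses a server log message id for the rule to keep the expression simple (rather than the DMS harvester attribute from the example, which depends on your environment), and the bean method names should be verified against the scripting note [ID 1377986.1] listed above.

# WLST sketch: create a WLDF Watch wired to the out of the box FMWDFW-notification
connect('weblogic', '<password>', 't3://adminhost:7001')
edit()
startEdit()

# 'Module-FMWDFW' is assumed to be the default WLDF module name in the SOA domain
wn = cmo.lookupWLDFSystemResource('Module-FMWDFW').getWLDFResource().getWatchNotification()

watch = wn.createWatch('MyServerLogWatch')
watch.setRuleType('Log')                             # watch the server log
watch.setRuleExpression("(MSGID = 'BEA-000337')")    # example: the stuck thread message id
watch.setEnabled(true)
watch.addNotification(wn.lookupJMXNotification('FMWDFW-notification'))

save()
activate()
disconnect()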

    Read the article
