FlashExtract: A Framework for Data Extraction by Examples
Vu Le* (University of California at Davis, vmle@ucdavis.edu)
Sumit Gulwani (Microsoft Research Redmond, sumitg@microsoft.com)

* Work done during two internships at Microsoft Research.

[Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions@acm.org. PLDI '14, June 09-11, 2014, Edinburgh, United Kingdom. Copyright (c) 2014 ACM 978-1-4503-2784-8/14/06...$15.00. http://dx.doi.org/10.1145/2594291.2594333]

Abstract

Various document types that combine model and view (e.g., text files, webpages, spreadsheets) make it easy to organize (possibly hierarchical) data, but make it difficult to extract raw data for any further manipulation or querying. We present a general framework FlashExtract to extract relevant data from semi-structured documents using examples. It includes: (a) an interaction model that allows end-users to give examples to extract various fields and to relate them in a hierarchical organization using structure and sequence constructs; (b) an inductive synthesis algorithm to synthesize the intended program from few examples in any underlying domain-specific language for data extraction that has been built using our specified algebra of few core operators (map, filter, merge, and pair).

We describe instantiation of our framework to three different domains: text files, webpages, and spreadsheets. On our benchmark comprising 75 documents, FlashExtract is able to extract intended data using an average of 2.36 examples in 0.84 seconds per field.

Categories and Subject Descriptors: D.1.2 [Programming Techniques]: Automatic Programming; I.2.2 [Artificial Intelligence]: Program Synthesis

General Terms: Languages, Algorithms, Human Factors

Keywords: Program Synthesis, End-user Programming, Programming by Examples

1. Introduction

The IT revolution over the past few decades has resulted in two significant advances: the digitization of massive amounts of data and widespread access to computational devices. However, there is a wide gap between access to rich digital information and the ability to manipulate and analyze it.

Information is available in documents of various types such as text/log files, spreadsheets, and webpages. These documents offer their creators great flexibility in storing and organizing hierarchical data by combining presentation/formatting with the underlying data model. However, this makes it extremely hard to extract the underlying data for several tasks such as data processing, querying, altering the presentation view, or transforming data to another storage format.

This has led to the development of various domain-specific technologies for data extraction. Scripting languages like Perl, Awk, and Python have been designed to support string processing in text files. Spreadsheet systems like Microsoft Excel allow users to write macros using a rich built-in library of string and numerical functions, or to write arbitrary scripts in Visual Basic/.NET programming languages. Web technologies like XQuery, HTQL, and XSLT can be used to extract data from webpages, but this carries the additional burden of knowing the underlying document structure.

Existing programmatic solutions to data extraction have three key limitations. First, the solutions are domain-specific and require knowledge/expertise in different technologies for different document types. Second, they require understanding of the entire underlying document structure, including the data fields that the end user is not interested in extracting and their organization (some of which may not even be visible in the presentation layer, as in the case of webpages). Third, and most significantly, they require knowledge of programming. The first two aspects create challenges for even programmers, while the third aspect puts these solutions out of reach of the vast majority of business end users who lack programming skills. As a result, users are either unable to leverage access to rich data or have to resort to manual copy-paste, which is both time-consuming and error prone.

In this paper, we address the problem of developing a uniform end-user friendly interface to support data extraction from semi-structured documents of various types. Our methodology includes two key novel aspects: a uniform user interaction model across different document types, and a generic inductive program synthesis framework.

Uniform and End-user Friendly Interaction Model. Our extraction interface supports data extraction via examples. The user initiates the process by providing a nested hierarchical definition of the data that he/she wants to extract using standard structure and sequence constructs. The user then provides examples of the various data fields and their relationships with each other. An interesting aspect is that this model is independent of the underlying document type. This is based on our observation that different document types share one thing in common: a two-dimensional presentation layer. We allow users to provide examples by highlighting two-dimensional regions on these documents. These regions indicate either the fields that the user wants to extract or structure/record boundaries around related fields.

Inductive Program Synthesis Framework. To enable data extraction from examples, we leverage inductive program synthesizers that can synthesize scripts from examples in an underlying domain-specific language (DSL). The key technical contribution of this paper is an inductive program synthesis framework that allows easy development of inductive synthesizers from a mere definition of DSLs (for data extraction from various document types). We describe an expressive algebra containing four core operators: map, filter, merge, and pair. For each of these operators, we define sound and complete generic inductive synthesis algorithms parameterized by the operator's arguments.
As a result, the synthesis designer simply needs to define a DSL with two features: (a) it should be expressive enough to provide appropriate abstractions for data extraction for the underlying document type, and (b) it should be built out of the operators provided by our core algebra. The synthesis algorithm is provided for free by our framework. This is a significant advance in the area of programming by examples, wherein the current literature [11, 12] is limited to domain-specific synthesizers.

This paper makes the following contributions:

• We present a uniform and end-user friendly interaction model for data extraction from examples (§3). This eliminates the need to learn domain-specific scripting technologies for various document types. It also eliminates the need to understand the document's internal data model and its representation.

• We present a rich algebra of operators for data extraction DSLs and a modular inductive synthesis strategy (which is sound, complete, and practically efficient) for each of these operators (§4). This allows development of inductive synthesizers for data extraction from various document types from a mere definition of an appropriate DSL using these operators. This eliminates the need to develop specialized program synthesis algorithms.

• We present three useful instantiations of our framework to the domains of text files, webpages, and spreadsheets (§5). Each of these improves the state of the art for data extraction from the respective document types.

• We present a detailed experimental evaluation that illustrates the effectiveness of the three instantiations of our general framework on 25 documents each (§6). These instantiations were able to synthesize the desired extraction script using an average of 2.36 examples in 0.84 seconds per field.

We start out with motivating examples for data extraction from various document types and illustrate our user interaction model.

2. Motivating Examples

In this section, we motivate some data extraction tasks across different document types and illustrate how FlashExtract can be used to automate the various tasks from examples.

Text Extraction

Example 1. Consider the text file in Fig. 1 (taken from a help forum thread [1]) that contains a sequence of sample readings, where each sample reading lists various "analytes" and their characteristics (analyte intensities). The user wants to extract the highlighted fields into an Excel spreadsheet in order to perform some analysis. Accomplishing this task by creating a one-off Perl script appears daunting, especially for a non-programmer.

[1] http://www.excelforum.com/excel-programming/608284-read-txt-file.html

Figure 1: Extracting data from a text file using FlashExtract.
Figure 2: Extracting data from a Google Scholar webpage.

Suppose the user only wants to extract the analyte names (magenta regions starting with instance "Be") and their mass (violet regions starting with instance "9"). The user starts with the analyte names. She highlights the first two regions "Be" and "Sc" as examples in magenta color. FlashExtract synthesizes an extraction program and uses it to highlight all other analyte instances. The user inspects and approves the highlighted result because it matches the intended sequence. She then moves to the mass field and repeats the process with violet color. FlashExtract can now automatically relate the respective instances from the magenta sequence and the violet sequence, and can generate a two-column Excel table if desired.

Now suppose the user also wants to extract the conc. mean (blue regions including instance "0.070073"). After one example, FlashExtract mistakenly includes the string ",""ug/L„,404615.043" in the result (this should be null). To exclude this region, the user draws a red line through it to mark it as a negative example, and FlashExtract refines the learning with the new information. It then produces the correct result and the user stops providing further examples. Although the third sequence contains fewer regions, FlashExtract is still able to relate it to the other two automatically because it is the only sequence containing null regions.

In case FlashExtract cannot relate different field regions, or does so incorrectly, the user can intervene by marking a structure boundary around related regions. For instance, the user may highlight the first yellow region as the structure boundary for the intensity of the first analyte. FlashExtract is able to infer similar yellow regions that group other intensities. If the user wants to further organize the analyte intensities into different samples, she creates the outer green regions. The user can then add the sample ID (orange field, such as "5007-01") to these green structures.

Once the highlighting process has been completed, the user can obtain the data (in different formats such as an XML file or an Excel table) and its associated data extraction program. The user may run the program on other similar files to extract data in the same output format without giving any additional examples.

Note that the user can extract data in any field order (we only demonstrated one such order). For example, the green regions can be highlighted before the yellow regions, which in turn can be highlighted before the violet regions. The top-down order is generally recommended and has a higher chance of success (because an inner field knows which field is its parent). Furthermore, the highlighting does not necessarily need to follow the actual file structure; it just needs to be consistent. For instance, the user may want the green structures to begin at "Sample ID", or the yellow structures to end in the middle of the lines, say before "ug/L".

We can combine FlashExtract with existing end-user programming technologies to create new user experiences. For instance, integration of FlashExtract with FlashFill [10] allows users to both extract and transform the highlighted fields using examples, and possibly push the changes back to the original document. As an example, after highlighting using FlashExtract, the user can easily change the precision of conc. mean (blue field) or the casing of analytes (magenta field) using FlashFill.
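For contrast, the one-off script route that the paper argues against might look as follows. This is a hand-written, hypothetical sketch: the input lines and the regular expression are invented for illustration and are neither the actual benchmark file nor anything FlashExtract produces. It shows why such scripts are brittle: every format quirk (quoting, null fields, subtotal rows) must be anticipated in the pattern.

```python
import re

# Hypothetical fragment of a sample-readings file (invented for illustration;
# the real forum-thread file has more columns and irregular rows).
lines = [
    '"Sample ID:","5007-01"',
    '"Be",9,"ug/L",0.070073',
    '"Sc",45,"ug/L",0.053257',
]

records = []
for line in lines:
    # Analyte rows: quoted element name, integer mass, unit, conc. mean.
    m = re.match(r'"([A-Z][a-z]?)",(\d+),"ug/L",([\d.]+)', line)
    if m:
        records.append((m.group(1), int(m.group(2)), float(m.group(3))))

print(records)  # [('Be', 9, 0.070073), ('Sc', 45, 0.053257)]
```

A row that deviates from this exact shape (e.g., a missing conc. mean) silently disappears from the output, which is the kind of failure the example-driven refinement loop above surfaces interactively instead.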
Webpage Extraction

Example 2. The Google Scholar website (http://scholar.google.com) has author pages containing a list of all publications. A publication consists of its title, list of authors, venue, number of citations, and year of publication. Fig. 2 shows an excerpt from a page of the researcher Mandana Vaziri.

Suppose the user wants to find all publication titles in which Dr. Vaziri is the first author. She can use FlashExtract to extract the publication title (blue) and just the first author (magenta) fields into an Excel spreadsheet, where she can utilize the native spreadsheet functionality to sort by the first author field. A key design principle behind FlashExtract is to provide a uniform interaction model independent of the underlying document type. The user interacts with webpages exactly as with text files. That is, the user gives examples for the desired fields by using colored highlighting over the rendered webpage. She does not need to understand the underlying structure of the webpages (as is required in the case of querying languages such as XPath or XQuery).

Now suppose the user wants to extract publication titles along with the list of all authors. Extracting the list of all authors is technically challenging because the comma-separated list of all authors is represented as a string in a single div tag. However, FlashExtract can be used to highlight each author individually. Its underlying DSL is expressive enough to allow extracting a list of regions in a text field of a single HTML node.

FlashExtract can be used to group a publication and all its authors together. The user may make this relation explicit by highlighting green regions. The same applies to the yellow regions that group author sequences. Once FlashExtract has produced the program, the user may run it on other scholar pages to perform the same task for other authors.

Spreadsheet Extraction

Example 3. Consider the semi-structured spreadsheet "Funded - February#A835C.xlsx" from the EUSES benchmark [15] shown in Fig. 3. Suppose the user wants to (a) add up the values in the Amount column (excluding the values in the subtotal rows), and (b) plot values in the Amount column grouped by department name.

Figure 3: Extracting data from a semi-structured spreadsheet.

The user highlights a few examples of the amount field (magenta), the department field (yellow), and the record boundaries (green). FlashExtract is then able to highlight all other similar instances and creates a new relational table view. For task (a), the user can now simply add up all the values in the amount column by using the native spreadsheet SUM function over the new relational view (instead of having to write a complicated conditional arithmetic macro over the original semi-structured spreadsheet view). Task (b) is easily accomplished using the popular "Recommended Charts" feature in Excel 2013, wherein Excel automatically suggests relevant charts over the user's data. Note that this feature only works when the data is properly formatted as a single relational table; it does not work on the original semi-structured spreadsheet.

If the user later decides to also extract the investigator name (blue), she can simply provide an example. As before, once all interactions are recorded, the output view can be automatically updated if the user continues to edit and maintain the original ad-hoc format consistently.

3. User Interaction Model

Our user interaction model for data extraction requires the user to provide an output data schema and highlight examples of regions (in the document) that contain the desired information. The final result is a schema extraction program that extracts the desired data from the document as an instance of the output schema. We next define these aspects in more detail.

Output Schema

The final product of an extraction task is a nested organization of the extracted data using standard structure and sequence constructs. Fig. 4 defines the language for the output schema.

    Schema    M = S | T
    Structure T = Struct(identifier : E1, ..., identifier : En)
    Element   E = f | S
    Sequence  S = Seq(f)
    Field     f = [color] τ | [color] T

    Figure 4: The language of schema for extracted data.

The output schema is either a sequence S over some field f, or a structure T with named elements E1, ..., En. Each element E corresponds to either a field f or a sequence S. Each field f is either an atomic type τ (also referred to as a leaf field) or a structure T. Each field f is associated with a unique color (denoted f.Color).

For example, the schemas for the two extraction tasks discussed in Ex. 1 are presented below. The first one represents a sequence of yellow structures, each of which contains a magenta Analyte field and a violet Mass field. The second one represents the organization of many more fields referenced in the task illustrated in Fig. 1.

    (1) Seq([yellow] Struct(Analyte : [magenta] String,
                            Mass : [violet] Int))
    (2) Seq([green] Struct(SampleID : [orange] String,
                           Intensities : Seq([yellow] Struct(
                               Analyte : [magenta] String,
                               Mass : [violet] Int,
                               CMean : [blue] Float))))

Note that the schema language does not allow a sequence to be directly nested inside another sequence. It requires a colored structure construct in between them. The structure serves as the boundary for the learning of the inner sequence.

Definition 1 (Ancestor). We say that a field f1 is an ancestor of a field f2 if f2 is nested inside f1. For notational convenience, we say that ⊥, which stands for the top-level data schema definition, is an ancestor of every field. Additionally, f1 is a sequence-ancestor of f2 if there is at least one sequence construct in the nesting between f1 and f2. Otherwise, f1 is a structure-ancestor of f2.

In the second schema above, the yellow structure is the structure-ancestor of the leaf fields Analyte, Mass, and CMean. The top-level green structure is the sequence-ancestor of the yellow structure.
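The schema language of Fig. 4 is small enough to model directly. Below is a minimal sketch in Python; the class and attribute names are ours, not the paper's implementation, and atomic types τ are modeled as Python types.

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass
class Struct:
    # Struct(id1 : E1, ..., idn : En): named elements, each a field or sequence.
    elements: List[Tuple[str, "Element"]]

@dataclass
class Seq:
    # Seq(f): a sequence over a single field.
    field: "Field"

@dataclass
class Field:
    # [color] tau (leaf) or [color] T (nested structure).
    color: str
    typ: Union[type, Struct]

Element = Union["Field", "Seq"]

# Schema (1) from the text: a sequence of yellow structures, each with a
# magenta Analyte (String) and a violet Mass (Int).
schema1 = Seq(Field("yellow", Struct([
    ("Analyte", Field("magenta", str)),
    ("Mass",    Field("violet",  int)),
])))

print(schema1.field.color)  # yellow
```

Note how the "no sequence directly inside a sequence" restriction falls out of the shape: `Seq` holds a `Field`, and only a `Struct` inside that field can reintroduce another `Seq`.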
function Run(schema extraction program Q, schema M, document D) : schema instance is
 1   CR := ∅
 2   foreach field f in M in top-down topological order do
 3       R̃ := Run(Q(f), D, CR)
 4       CR := CR ∪ {(f.Color, R) | R ∈ R̃}
 5   if CR is inconsistent with M then return ⊥
 6   else return Fill(M, D.Region)

function Run(extraction program (f′, P) of field f, document D, highlighting CR) : f-regions is
 7   R̃′ := (f′ = ⊥) ? {D.Region} : CR[f′.Color]
 8   return ⋃_{R′ ∈ R̃′} ⟦P⟧ R′    /* execute P on R′ */

Algorithm 1: Execution semantics of extraction programs.

    Fill(Struct(id1 : E1, ..., idn : En), R) = new Struct {id1 = Fill(E1, R), ..., idn = Fill(En, R)}
    Fill(Seq(f), R)      = new Seq(Map(λR′ : Fill(f, R′), Subregions(R, CR[f.Color])))
    Fill([color] τ, R)   = Subregion(R, CR[color]).Val
    Fill([color] T, R)   = Fill(T, Subregion(R, CR[color]))
    Fill(_, ⊥)           = ⊥

    Figure 5: Semantics of Fill.

We categorize the ancestral relationship in order to invoke appropriate synthesis algorithms.

Regions

Definition 2 (Region). A region R of a document is some two-dimensional portion over the visualization layer of that document that the user is allowed to highlight in some color. We use the notation f-region to denote any region that the user highlights in f.Color. Any region R that is associated with a leaf field (also referred to as a leaf region) has some value associated with it, which is denoted by R.Val. For a document D, we use the notation D.Region to denote the largest region possible in D.

In the case of text files, any region is represented by a pair of two character positions within the file and consists of all characters in between (these positions may or may not be within the same line). The value of such a region is the string of all characters in between those positions.

In the case of webpages, a leaf region is represented by either an HTML node (the value of such a region is the text value associated with that node) or a pair of character positions within the text content of an HTML node (the value of such a region is the string of all characters in between those positions). A non-leaf region is represented by an HTML node.

In the case of spreadsheets, a leaf region is represented by a single cell (and its value is the cell's content), while a non-leaf region is represented by a pair of cells (and consists of the rectangular region determined by those cells).

Definition 3 (Highlighting). A highlighting CR of a document D is a collection of colored regions in D. It can also be viewed as a function that maps a color to all regions of that color in D. We say that a highlighting CR is consistent with a data schema M if the following conditions hold.

• For any two regions (in CR), either they don't overlap or one is nested inside the other.
• For any two fields f1 and f2 in M such that f1 is an ancestor of f2, each f2-region R2 is nested inside some f1-region R1.
• For any two fields f1 and f2 in M such that f1 is a struct-ancestor of f2, there is at most one f2-region inside an f1-region.
• For every leaf field f in M, the value of any f-region in CR is of type f.

Schema Extraction Program

FlashExtract synthesizes extraction programs for individual fields and combines them into a schema extraction program following the structure imposed by the output schema. FlashExtract also leverages the schema's structure to simplify the learning of individual fields. In particular, it relates a field f to one of its ancestors, whose extraction program (in the case of a non-⊥ ancestor) defines learning boundaries for f (i.e., each f-region must reside inside one of these boundaries).

Definition 4 (Extraction Programs). A schema extraction program Q for a given schema M is represented as a map from each field f in M to a field extraction program, denoted Q(f). A field extraction program of field f is a pair (f′, P), where f′ (possibly ⊥) is some ancestor field of f and P is either a SeqRegion program that extracts a sequence of f-regions from inside a given f′-region (in case f′ is a sequence-ancestor of f), or a Region program that extracts a single f-region from inside a given f′-region (in case f′ is a struct-ancestor of f).

The execution semantics of a schema extraction program is defined in Algorithm 1. FlashExtract executes the field extraction program corresponding to each field f in a top-down order and updates the document highlighting CR using the returned list of f-regions R̃ (lines 2-4). For each field f, it first finds the set of regions R̃′ determined by the ancestor f′ (line 7), and then computes all f-regions by executing the field extraction program on each region R′ in R̃′ (line 8). Once CR has been fully constructed, it generates a schema instance from the nesting relationship defined in the output schema M, using the Fill function (line 6).

Fig. 5 defines the semantics of Fill recursively. Each definition takes a schema construct and a region corresponding to one of its ancestor fields, and returns a construct instance by recursively applying Fill functions on its descendants. CR[c] returns all regions whose color is c. Subregions(R, R̃) returns the ordered set of regions from R̃ that are nested inside R. Subregion(R, R̃) returns the region from R̃ that is nested inside R; if no such region exists, ⊥ is returned. Note that if CR is consistent with M, there is at most one such region. We assume the presence of an API for checking the nestedness of two regions.

Example-based User Interaction

Having defined all necessary concepts, we are now ready to discuss the way a user interacts with FlashExtract to extract their desired data. The user first supplies the output data schema. Then, for each field f in the schema (in an order determined by the user), the user simply provides a sufficient number of examples of field instances of field f by highlighting appropriate regions in the document using f.Color. Our user interface supports standard mouse click, drag, and release gestures.

When the user provides examples for a field f, FlashExtract synthesizes a field extraction program for field f (using Algorithm 2) that is consistent with the provided examples, and executes it to identify and highlight other regions in f.Color. (See Def. 5 for a formal notion of consistency.) If the user is happy with the inferred highlighting, she can commit the results (the field f is said to have been materialized at that point of time), and then proceed to another (non-materialized) field. Otherwise, the user may provide additional examples.

We say that a field f has been simplified if there exists a materialized field f′ such that f′ is a structure-ancestor of f. The examples for a non-simplified field consist of positive instances and optionally negative instances of regions that lie completely within the regions of the immediate ancestor field that has been materialized.
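The recursive Fill of Fig. 5 can be made concrete over character-span regions in a text file. The following is a simplified executable model, assuming nestedness is span containment; the tuple encoding of schemas and all helper names are our own, not the paper's implementation.

```python
# A region is a (start, end) character span; a highlighting CR maps a color
# to the ordered list of regions of that color. Schemas are modeled as:
#   ("seq", field) | ("struct", [(id, field), ...])
# with fields ("leaf", color) or ("struct-field", color, struct).
text = "Be 9; Sc 45"
CR = {
    "yellow":  [(0, 4), (6, 11)],   # record (structure) boundaries
    "magenta": [(0, 2), (6, 8)],    # analyte names
    "violet":  [(3, 4), (9, 11)],   # masses
}

def subregions(R, regions):
    # Subregions(R, R~): regions of R~ nested inside R, in document order.
    return [r for r in regions if R[0] <= r[0] and r[1] <= R[1]]

def subregion(R, regions):
    # Subregion(R, R~): the unique nested region (at most one if CR is
    # consistent with the schema), or None for bottom.
    rs = subregions(R, regions)
    return rs[0] if rs else None

def fill(schema, R):
    if R is None:                       # Fill(_, bottom) = bottom
        return None
    kind = schema[0]
    if kind == "seq":                   # map Fill over nested field regions
        _, f = schema
        return [fill(f, r) for r in subregions(R, CR[f[1]])]
    if kind == "struct":                # fill each named element under R
        _, elems = schema
        return {ident: fill(f, R) for ident, f in elems}
    if kind == "leaf":                  # leaf: the region's value (its text)
        _, color = schema
        r = subregion(R, CR[color])
        return text[r[0]:r[1]] if r else None
    if kind == "struct-field":          # descend into the colored structure
        _, color, struct = schema
        return fill(struct, subregion(R, CR[color]))

schema = ("seq", ("struct-field", "yellow", ("struct", [
    ("Analyte", ("leaf", "magenta")),
    ("Mass",    ("leaf", "violet")),
])))
print(fill(schema, (0, len(text))))
```

Running it on the toy highlighting yields the nested instance `[{'Analyte': 'Be', 'Mass': '9'}, {'Analyte': 'Sc', 'Mass': '45'}]`, mirroring how Fill turns a flat color-to-regions map into a hierarchical schema instance.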
function SynthesizeFieldExtractionProg(Document D, Schema M, Highlighting CR, Field f, Regions R̃1, Regions R̃2) is
     /* R̃1, R̃2 denote positive, negative instances */
 1   foreach ancestor field f′ of f in schema M do
 2       if f′ isn't materialized ∧ f′ ≠ ⊥ then continue
 3       R̃ := (f′ = ⊥) ? {D.Region} : CR[f′.Color]
 4       if f′ is a sequence-ancestor of f then
 5           ex := ∅
 6           foreach R ∈ R̃ s.t. Subregions(R, R̃1 ∪ R̃2) ≠ ∅ do
 7               ex := ex ∪ {(R, Subregions(R, R̃1), Subregions(R, R̃2))}
 8           P̃ := SynthesizeSeqRegionProg(ex)
 9       else /* f′ is a structure-ancestor of f */
10           ex := ∅
11           foreach R ∈ R̃ s.t. Subregion(R, R̃1) ≠ ⊥ do
                 ex := ex ∪ {(R, Subregion(R, R̃1))}
12           P̃ := SynthesizeRegionProg(ex)
13       foreach P ∈ P̃ do
             CR′ := CR ∪ {(f.Color, R′) | R′ ∈ ⟦P⟧ R, R ∈ R̃}
14           if CR′ is consistent with M then return (f′, P)
15   return ⊥

Algorithm 2: Synthesize a field extraction program.

If the user is not happy with the inferred highlighting, the user provides additional positive instances (of regions that FlashExtract failed to highlight) or negative instances (of unintended regions that FlashExtract highlighted), and the synthesis process is repeated. The examples for a simplified field consist of at most a single positive instance (possibly null) inside each region of the immediate ancestor field that has been materialized. If the user is not happy with the inferred highlighting, the user provides additional examples and the synthesis process is repeated.

The example-based interaction is enabled by the procedure SynthesizeFieldExtractionProg described in Algorithm 2, which takes as input a document D, a schema M, a highlighting CR of the document that is consistent with M, a non-materialized field f, a set of positive instances R̃1, and a set of negative instances R̃2 (which is empty in case field f has been simplified). The procedure returns a program P such that (a) P is consistent with the examples, and (b) the updated highlighting that results from executing P is consistent with the schema M. Line 2 finds a suitable ancestor f′ from CR that forms the learning boundary for f. The loops at lines 6 and 11 group the input examples into the boundaries imposed by f′-regions. Depending on the relationship between f′ and f, FlashExtract invokes an appropriate API provided by our inductive synthesis framework. In particular, it invokes SynthesizeSeqRegionProg (line 8) to learn a program for extracting a sequence of f-regions in an f′-region (if f′ is a sequence-ancestor of f), or SynthesizeRegionProg (line 12) to learn a program for extracting a single f-region in an f′-region (if f′ is a structure-ancestor of f). Both SynthesizeSeqRegionProg and SynthesizeRegionProg actually return a sequence of programs of the right type. The loop at line 13 selects the first program P in this sequence (if it exists) that ensures that the updated highlighting resulting from executing P is consistent with the schema M. We describe this framework and its APIs in the next section.

An interesting aspect of the above-mentioned interaction is the order in which the user iterates over the various fields. FlashExtract is flexible enough to let users extract various fields in any iteration order. This is especially useful when the user dynamically decides to update the data extraction schema (e.g., to extract more fields). Iterating over fields in a bottom-up ordering offers an interesting advantage: it allows FlashExtract to guess the organization of leaf field instances by looking at their relative order (thereby obviating the need to provide examples for any non-leaf field). While this is successful in most cases, it may not be able to deal with cases where field instances may be null. On the other hand, iterating over fields in a top-down topological order requires the user to provide examples for each field (including non-leaf fields), but it offers three advantages: (a) it provides an easy visualization for the user to inspect the results of the organization of various non-leaf field instances, (b) it provides a greater chance of success, since the synthesis task for a field can now be enabled relative to any ancestor field as opposed to the entire document, and (c) it may entail having to provide fewer examples for any field that is nested inside another field whose instances have all been identified. Hence, if the leaf field instances are never null and the user does not need help with identifying representative examples, the user may simply provide a few examples for each leaf field and FlashExtract may be able to automatically infer the organization of the various leaf field instances. Otherwise, we recommend that the user iterate over fields in a top-down topological order.

4. Inductive Synthesis Framework

In this section, we describe a general framework for developing the inductive synthesis APIs (namely, SynthesizeSeqRegionProg and SynthesizeRegionProg) that enable the example-based user interaction model discussed in the previous section. We build this framework over the inductive synthesis methodology proposed by Gulwani et al. [12] of designing appropriate DSLs and developing algorithms to synthesize programs in those DSLs from examples. However, we go one step further. We identify an algebra of core operators that can be used to build various data extraction DSLs for various document types (§4.2). We also present modular synthesis algorithms for each of these operators in terms of the synthesis algorithms for its (non-atomic) arguments (§4.3); this enables automatic generation of synthesis algorithms for any DSL that is constructed using our algebra of core operators. We start out by formalizing the notion of a data extraction DSL.

4.1 Data Extraction DSLs

A data extraction DSL is represented by a tuple (G, N1, N2). G is a grammar that defines data extraction strategies. It contains definitions for various non-terminals N. Each non-terminal N is defined as a ranked collection of rules (also referred to as N.RHSs) of the same type. The type of a non-terminal is the type of its rules. A rule consists of a fixed expression or an operator applied to other non-terminals of appropriate types or fixed expressions. The type of a rule is the return type of the fixed expression or the operator that constitutes the rule.

We say a non-terminal is synthesizable if each of its rules either (a) involves an operator from our core algebra applied to fixed expressions or synthesizable non-terminals, (b) involves an operator that is equipped with an inductive synthesis algorithm of its own (i.e., a domain-specific operator), or (c) is a fixed expression. N1 is a distinguished (top-level) synthesizable non-terminal of type sequence of regions. N2 is another distinguished (top-level) synthesizable non-terminal of type region.

An expression generated by a non-terminal of type T can be viewed as a program with return type T. Note that the expressions generated by N1 represent SeqRegion programs and the expressions generated by N2 represent Region programs. The DSL expressions may involve one distinguished free variable R0 (of type Region) that denotes the input to the top-level SeqRegion or Region programs. Any other free variable that occurs in a DSL expression must be bound to some lambda definition that occurs in a higher-level expression.
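Stepping back to Algorithm 2: its example-grouping loops (lines 6 and 11) simply bucket the user's positive and negative instances by the ancestor region that contains them. A sketch with span regions follows; the helper names are ours, and containment stands in for the paper's nestedness check.

```python
def contains(outer, inner):
    # Nestedness for (start, end) span regions.
    return outer[0] <= inner[0] and inner[1] <= outer[1]

def group_examples(ancestor_regions, positives, negatives):
    """Build (R, positives-in-R, negatives-in-R) triples for the
    sequence-ancestor case, skipping ancestor regions that contain
    no examples, as in lines 6-7 of Algorithm 2."""
    ex = []
    for R in ancestor_regions:
        pos = [r for r in positives if contains(R, r)]
        neg = [r for r in negatives if contains(R, r)]
        if pos or neg:
            ex.append((R, pos, neg))
    return ex

# Three ancestor regions; the user gave one positive example in the first
# and one negative example in the second, none in the third.
triples = group_examples(
    ancestor_regions=[(0, 10), (10, 20), (20, 30)],
    positives=[(2, 4)],
    negatives=[(12, 14)],
)
print(triples)  # [((0, 10), [(2, 4)], []), ((10, 20), [], [(12, 14)])]
```

Each resulting triple is exactly one input example for SynthesizeSeqRegionProg, which is why relating a field to a close ancestor (rather than to the whole document) yields smaller, easier synthesis subproblems.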
A state σ of a program P is an assignment to all free variables in P. We use the notation {x ← v} to create a state that maps variable x to value v. We use the notation σ[v/x] to denote setting the value of variable x to value v in an existing state σ. We use the notation ⟦P⟧σ to denote the result of executing the program P in state σ.

Next, we discuss the core operators in our algebra that can be used to build data extraction DSLs.

4.2 Core Algebra for Constructing Data Extraction DSLs

Our core algebra is based around certain forms of map, filter, merge, and pair operators. The pair operator (which returns a scalar) constitutes a scalar expression, while the other operators (which return a sequence) constitute sequence expressions.

Decomposable Map Operator  A Map operator has two arguments λx : F and S, where S is a sequence expression of type List⟨T⟩ and F is some expression of type T' that uses an additional free variable x. The return type of Map is List⟨T'⟩. Map(λx : F, S) has the standard semantics, wherein it applies the function F to each element of the sequence produced by S to construct the resultant sequence:

    ⟦Map(λx : F, S)⟧σ = [t0, . . . , tn], where n = |Y| − 1, ti = F(σ[Y[i]/x]), Y = ⟦S⟧σ

We say that a Map operator is decomposable (w.r.t. the underlying DSL, which defines the language for F and S) if it has the following property: for any input state σ and sequence Y, there exists a sequence Z such that

    ∀F, S : Y ⊑ ⟦Map(F, S)⟧σ ⟹ Z ⊑ ⟦S⟧σ ∧ ⟦Map(F, Z)⟧σ = Y

where ⊑ denotes the subsequence relationship. Let Decompose be a function that computes such a witness Z, given σ and Y. It has the following signature:

    Map.Decompose : (Region × List⟨T'⟩) → List⟨T⟩

The Decompose function facilitates the reduction of examples for the Map operator to examples for its arguments F and S, thereby reducing the task of learning the desired Map expression from examples to the sub-tasks of learning F and S expressions from their respective examples.

Filter Operators  Our algebra allows two kinds of filter operators over sequences: one that selects elements based on their properties (FilterBool), and one that selects elements based on their indexes (FilterInt).

A FilterBool operator has two arguments λx : B and S, where S is a sequence expression of type List⟨T⟩ and B is a Boolean expression that uses an additional free variable x. The return type of FilterBool is List⟨T⟩. FilterBool(λx : B, S) has the standard filtering semantics: it selects those elements from S that satisfy B. For example, if S is the sequence of all lines in Ex. 1, then the expression FilterBool(λx : EndsWith([Number, Quote], x), S) selects all yellow lines. The predicate EndsWith([Number, Quote], x) returns true iff the string x ends with a number followed by a double quote.

A FilterInt operator has three arguments: a non-negative integer init, a positive integer iter, and a sequence expression S. Its return value has the same type as that of S. The FilterInt operator takes every iter-th element from S, starting with the element at index init. Its semantics is as follows:

    ⟦FilterInt(init, iter, S)⟧σ = let L = ⟦S⟧σ in Filter(λx : (indexof(L, x) − init) % iter = 0, L)

For example, FilterInt(1, 2, S) selects all elements at odd indices of a sequence. The two kinds of filter operators can be composed to enable sophisticated filtering operations.

Merge Operator  A Merge operator takes as input a set of n sequence expressions, each of which is generated by the same non-terminal A (of some type of the form List⟨T⟩). The return value of Merge also has the same type as that of A. The Merge operator combines the results of these n expressions together; this is useful when a single expression cannot extract all intended regions. This operator is a disjunctive abstraction and allows extraction of multiple-format field instances by merging several single-format field instances. Its semantics is as follows:

    ⟦Merge(A1, . . . , An)⟧σ = MergeSeq(⟦A1⟧σ, . . . , ⟦An⟧σ)

The MergeSeq operation merges its argument sequences with respect to the order of their elements' locations in the original file.

Pair Operator  A Pair operator has two arguments A and B, and has the standard pair semantics:

    ⟦Pair(A, B)⟧σ = (⟦A⟧σ, ⟦B⟧σ)

The pair operator allows constructing region representations from smaller elements. For example, we can create a text region from a pair of its start and end positions.

4.3 Modular Synthesis Algorithms

The API SynthesizeSeqRegionProg takes as input a set of examples, each of which consists of a triple (R, R̃1, R̃2), where R denotes the input region, R̃1 denotes positive instances of the regions that should be highlighted within R, and R̃2 denotes negative instances of the regions that should not be highlighted within R. The API SynthesizeRegionProg takes as input a set of examples, each of which consists of a pair (R, R'), where R denotes the input region and R' denotes the output region. Both these APIs return an ordered set of programs in the DSL, each of which is consistent with the provided examples. Fig. 6 contains the pseudo-code for these APIs, which we explain below.

The method SynthesizeSeqRegionProg first learns from only positive instances by invoking the learn method of the top-level sequence non-terminal N1 (line 2), and then selects those programs that additionally satisfy the negative instances constraint (loop at line 4). The operator :: appends an element to a list. The method SynthesizeRegionProg simply invokes the learn method of the top-level region non-terminal N2 (line 10).

The learn method for any non-terminal N simply involves invoking the learn method associated with each of its rules (line 13) and then returning the ordered union of the sequences of programs synthesized from each. The operator ++ performs list concatenation.

We next briefly describe the key insights behind the learn methods for the operators that constitute the various rules. The high-level idea is to define them in terms of the learn methods of their arguments. This allows for a free synthesis algorithm for any data extraction DSL.

Learning Decomposable Map Operator  The key idea here is to first find the witness Zj for each example (σj, Yj) using the operator's Decompose method (line 16). This allows us to split the problem of learning a map expression into two independent, simpler sub-problems, namely learning of a scalar expression F (line 18) and learning of a sequence expression S (line 20) from appropriate examples. The returned result is an appropriate cross-product style composition of the results returned from the sub-problems (line 25).

Learning Merge Operator  The key idea here is to consider all (minimal) partitionings of the examples such that the learn method of the argument for each partition returns a non-empty result. The set T (at line 28) holds all such minimal partitions. For each such partition (loop at line 30), we invoke the learn method of the argument for each class in the partition (line 31), and then appropriately combine the results.
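As a concrete reference for the operator semantics above, here is a minimal, self-contained Python sketch (our own illustration, not the paper's implementation) of Map, FilterBool, FilterInt, Merge, and Pair over plain Python lists; element locations are modeled by the natural ordering of the elements:

```python
def map_op(f, seq):
    # ⟦Map(λx:F, S)⟧σ applies F to every element of ⟦S⟧σ.
    return [f(x) for x in seq]

def filter_bool(pred, seq):
    # ⟦FilterBool(λx:B, S)⟧σ keeps the elements satisfying B.
    return [x for x in seq if pred(x)]

def filter_int(init, iter_, seq):
    # ⟦FilterInt(init, iter, S)⟧σ keeps every iter-th element,
    # starting with the element at index init.
    return [x for i, x in enumerate(seq)
            if i >= init and (i - init) % iter_ == 0]

def merge(*seqs):
    # ⟦Merge(A1,...,An)⟧σ: union of the sequences, reordered by the
    # elements' locations in the original file (here: natural order).
    return sorted({x for s in seqs for x in s})

def pair(a, b):
    # ⟦Pair(A, B)⟧σ = (⟦A⟧σ, ⟦B⟧σ)
    return (a, b)
```

For instance, filter_int(1, 2, seq) returns the odd-indexed elements, matching the FilterInt(1, 2, S) example in the text.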
function SynthesizeSeqRegionProg (Set⟨(Region, List⟨Region⟩, List⟨Region⟩)⟩ Q) : List⟨Prog⟩ is
 1   Q' := {({R0 ← R}, R̃1) | (R, R̃1, R̃2) ∈ Q}  /* learn with positive examples */
 2   P̃' := N1.Learn(Q')  /* start symbol for sequence */
 3   P̃ := []
 4   foreach P ∈ P̃' do
 5     if (∃(R, R̃1, R̃2) ∈ Q : (⟦P⟧{R0 ← R}) ∩ R̃2 ≠ ∅) then
 6       continue  /* P violates negative instances */
 7     P̃ := P̃ :: P
 8   return P̃

function SynthesizeRegionProg (Set⟨(Region, Region)⟩ Q) : List⟨Prog⟩ is
 9   Q' := {({R0 ← R}, R') | (R, R') ∈ Q}
10   return N2.Learn(Q')  /* start symbol for region */

function N.Learn (Set⟨(State, T)⟩ Q) : List⟨Prog⟩ is
11   P̃ := []
12   foreach C ∈ N.RHSs do
13     P̃ := P̃ ++ C.Learn(Q)
14   return P̃

function Map.Learn (Set⟨(State, List⟨T⟩)⟩ Q) : List⟨Prog⟩ is
     /* Let F and S be the function and sequence arguments of Map. */
     Let Q be {(σj, Yj)}1≤j≤m
15   for j := 1..m do  /* find witnesses Z */
16     Zj := Map.Decompose(σj, Yj)
17   Q1 := {(σj[Zj[i]/x], Yj[i]) | 0 ≤ i < len(Zj), 1 ≤ j ≤ m}
18   P̃1 := F.Learn(Q1)  /* learn Map's function F */
19   Q2 := {(σj, Zj) | 1 ≤ j ≤ m}
20   P̃2 := S.Learn(Q2)  /* learn Map's sequence S */
21   P̃ := []
22   foreach P1 ∈ P̃1 do
23     foreach P2 ∈ P̃2 do
24       P̃ := P̃ :: "Map(P1, P2)"
25   return CleanUp(P̃, Q)

function Merge.Learn (Set⟨(State, List⟨T⟩)⟩ Q) : List⟨Prog⟩ is
     /* Let A be the non-terminal that forms the arguments of Merge. */
     Let Q be {(σj, Yj)}1≤j≤m
     /* X holds all possible subsets of examples */
26   X := {Q' | Q' = {(σj, Y'j)}1≤j≤m, ∀1 ≤ j ≤ m : Y'j ⊑ Yj, A.Learn(Q') ≠ []}
27   Y := ∪(σj,Yj)∈Q Yj  /* all positive examples */
     /* T includes partitions that cover all examples */
28   T := {X' | X' is a minimal subset of X s.t. ∪{Y'j | (σj, Y'j) ∈ Q', Q' ∈ X'} = Y}
29   P̃ := []
30   foreach X' ∈ T ordered by size do
       Let Q1, . . . , Qn be the various elements of X'
31     P̃1, . . . , P̃n := A.Learn(Q1), . . . , A.Learn(Qn)
32     P̃ := P̃ ++ {"Merge(P1, . . . , Pn)" | ∀1 ≤ i ≤ n : Pi ∈ P̃i}
33   return CleanUp(P̃, Q)

function FilterBool.Learn (Set⟨(State, List⟨T⟩)⟩ Q) : List⟨Prog⟩ is
     /* Let B and S be the predicate and sequence arguments of FilterBool. */
35   P̃1 := S.Learn(Q)  /* learn sequence S */
36   Q' := {(σ[Y[i]/x], True) | (σ, Y) ∈ Q, 0 ≤ i < len(Y)}
37   P̃2 := B.Learn(Q')  /* learn filter B */
38   P̃ := []
39   foreach P1 ∈ P̃1 do
40     foreach P2 ∈ P̃2 do
41       P̃ := P̃ :: "FilterBool(P2, P1)"
42   return CleanUp(P̃, Q)

function FilterInt.Learn (Set⟨(State, List⟨T⟩)⟩ Q) : List⟨Prog⟩ is
     /* Let S be the sequence argument of FilterInt. */
     Let Q be {(σj, Yj)}1≤j≤m
43   P̃1 := S.Learn(Q)  /* learn sequence S */
44   P̃ := []
45   foreach P1 ∈ P̃1 do
46     init := ∞
47     iter := 0
48     for j := 1 . . . m do
49       Zj := ⟦P1⟧σj
50       init := Min(init, indexof(Zj, Yj[0]))
51       for i := 0 . . . |Yj| − 2 do
52         t := indexof(Zj, Yj[i + 1]) − indexof(Zj, Yj[i])
53         if iter = 0 then iter := t
54         else iter := GCD(iter, t)
55     if iter = 0 then iter := 1
56     P̃ := P̃ :: "FilterInt(init, iter, P1)"
57   return CleanUp(P̃, Q)

function Pair.Learn (Set⟨(State, (T1, T2))⟩ Q) : List⟨Prog⟩ is
     /* Let A and B be the two arguments of Pair. */
     Let Q be {(σj, (uj, u'j))}1≤j≤m
58   Q1 := {(σj, uj)}1≤j≤m ;  Q2 := {(σj, u'j)}1≤j≤m
59   P̃1 := A.Learn(Q1)
60   P̃2 := B.Learn(Q2)
61   if P̃1 = [] or P̃2 = [] then return []
62   P̃ := []
63   foreach P1 ∈ P̃1 do
64     foreach P2 ∈ P̃2 do
65       P̃ := P̃ :: "Pair(P1, P2)"
66   return P̃

function CleanUp (List⟨Prog⟩ P̃, Set⟨(State, List⟨T⟩)⟩ Q) : List⟨Prog⟩ is
67   P̃' := []
68   foreach i = 1 to |P̃| do
69     P := P̃[i]
70     incl := true
71     foreach k = 1 to |P̃| do
72       if (P̃[k] subsumes P w.r.t. Q) and ((P does not subsume P̃[k] w.r.t. Q) or k < i) then incl := false
73     if (incl = true) then P̃' := P̃' :: P
74   return P̃'

Figure 6: Modular inductive synthesis algorithm in FlashExtract.
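One self-contained piece of Fig. 6 that is easy to illustrate is FilterInt's inference of (init, iter) (lines 46-55). The sketch below (a hypothetical Python rendering, not the paper's implementation) takes examples as pairs (Z, Y), where Z is the full sequence produced by a candidate sub-program and Y is the user-highlighted subsequence, and infers the most specific offset and stride consistent with all examples:

```python
from math import gcd

def learn_filter_int(examples):
    """Infer (init, iter) from pairs (Z, Y): init is the minimum index of
    Y's first element in Z across examples; iter is the GCD of the index
    gaps between contiguous highlighted elements."""
    init, iter_ = float("inf"), 0
    for Z, Y in examples:
        init = min(init, Z.index(Y[0]))
        for i in range(len(Y) - 1):
            t = Z.index(Y[i + 1]) - Z.index(Y[i])
            iter_ = t if iter_ == 0 else gcd(iter_, t)
    if iter_ == 0:          # only single-element highlights were seen
        iter_ = 1
    return init, iter_
```

For example, highlighting the elements at indices 1, 3, 5 of a ten-element sequence yields init = 1 and iter = 2, i.e., the program FilterInt(1, 2, ...).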
Although this algorithm is exponential in the size of the input set, it works efficiently in practice because the number of examples required for learning is very small.

Learning Filter Operators  The key idea in the learn method for FilterBool is to independently learn an ordered set P̃1 of sequence expressions (line 35) and an ordered set P̃2 of Boolean expressions (line 37) that are consistent with the given examples. The returned result is an appropriate cross-product style composition of the sub-results P̃1 and P̃2.

The key idea in the learn method for FilterInt is to first learn an ordered set P̃1 of sequence expressions (line 43) that are consistent with the given examples. Then, for each such program, we learn the most restrictive filtering logic (one that filters out as few elements as possible) while staying consistent with the examples. In particular, we select init to be the minimum offset (across all examples) of the first element of Yj in the sequence Zj returned by executing the sequence program in the example state σj (line 50). We select iter to be the GCD of all distances between the indices of any two contiguous elements of Yj in Zj (line 54).

Learning Pair Operator  The key idea here is to invoke the learn method of the first (second) argument at line 59 (line 60) to learn programs that can compute the first (second) element of the various output pairs in the examples from the respective inputs. The final result is produced by taking a cross-product of the two sets of programs that are learned independently (loop at line 63).

CleanUp Optimization  An important performance and ranking optimization employed by the learn methods of various operators is the use of the CleanUp method, which removes those programs that extract more regions than some retained program. More precisely, this method removes from an ordered set of programs each (lower-ranked) program that is subsumed by some unremoved program (see Def. 6). Note that this does not affect the completeness property associated with the various learning methods (Theorem 3). Furthermore, it implements an important ranking criterion that assigns higher likelihood to the scenario wherein the user provides consecutive examples from the beginning of any immediately enclosing ancestral region (as opposed to providing arbitrary examples).

4.4 Correctness

We now describe the correctness properties associated with our two key synthesis APIs: SynthesizeSeqRegionProg and SynthesizeRegionProg. First, we state some useful definitions.

DEFINITION 5 (Consistency). A scalar program P (i.e., a program that returns a scalar value) is said to be consistent with a set Q = {(σj, uj)}j of scalar examples if ∀j : uj = ⟦P⟧σj. A sequence program P (i.e., a program that returns a sequence) is said to be consistent with a set Q = {(σj, Yj)}j of sequence examples with positive instances if ∀j : Yj ⊆ ⟦P⟧σj. A sequence program P is said to be consistent with a set Q = {(σj, Yj, Y'j)}j of sequence examples with positive and negative instances if ∀j : (Yj ⊆ ⟦P⟧σj ∧ Y'j ∩ ⟦P⟧σj = ∅).

DEFINITION 6 (Subsumption). Given a set Q = {(σj, Yj)}j of sequence examples with positive instances, and two sequence programs P1, P2 that are consistent with Q, we say that P1 subsumes P2 w.r.t. Q if ∀j : ⟦P1⟧σj ⊆ ⟦P2⟧σj.

The following two theorems hold.

THEOREM 1 (Soundness). The programs returned by SynthesizeSeqRegionProg and SynthesizeRegionProg are consistent with the input set of examples.

The proof of Theorem 1 follows easily by induction (on the structure of the DSL) from the similar soundness property of the learn methods associated with the non-terminals and core algebra operators.

THEOREM 2 (Completeness). If there exists some program that is consistent with the input set of examples, SynthesizeSeqRegionProg (and SynthesizeRegionProg) produce one such program.

The proof of Theorem 2 follows from two key observations: (a) the learn methods associated with the scalar non-terminals and the (scalar) Pair operator satisfy a similar completeness property; (b) the learn methods associated with the sequence non-terminals and the sequence operators of the core algebra satisfy a stronger completeness theorem, stated below.

THEOREM 3 (Strong Completeness). The learn methods associated with the sequence non-terminals and the sequence operators of the core algebra (namely Map, FilterBool, FilterInt, and Merge) satisfy the following property: "For every sequence program P that is consistent with the input set of examples, there exists a program P' in the learned set of programs that subsumes P."

The proof of Theorem 3 follows by induction on the DSL's structure. Note that the CleanUp optimization only removes those (lower-ranked) programs that are subsumed by other programs.

5. Instantiations

We now present instantiations of our general framework to three different data extraction domains: text files, webpages, and spreadsheets. For each domain, we define the notion of a region and the domain's underlying data extraction DSL, and discuss the implementation of its domain-specific learn methods.

5.1 Text Instantiation

A region in this domain is a pair of character positions in the input text file.

Language Ltext  Fig. 7 shows the syntax of Ltext, our data extraction DSL for this domain. The core algebra operators are in bold. We name the various Map operators differently in order to associate different Decompose methods with them. The non-terminal N1 is a Merge operator over constituent sequence expressions SS. The non-terminal N2 is defined as a Pair operator over two position expressions.

The position expression Pos(x, p) evaluates to a position in string x that is determined by the attribute p (inspired by a similar concept introduced in [10]). The attribute p is either an absolute position k, or the kth element of the position sequence identified by the regex pair rr, which consists of two regexes r1 and r2. The selection order is left-to-right if k is positive, or right-to-left if k is negative. The position sequence identified by (r1, r2) in string x, also referred to as PosSeq(x, rr), is the set of all positions k in x such that (some suffix of the substring on) the left side of k matches r1 and (some prefix of the substring on) the right side of k matches r2. A regex r is simply a concatenation of (at most 3) tokens. A token T is a pre-defined standard character class such as alphabets, digits, the colon character, etc. (we used 30 such tokens in our instantiation). We also define some context-sensitive tokens dynamically, based on frequently occurring string literals in the neighborhood of examples highlighted by the user. For instance, in Ex. 1, our dynamically learned tokens include the string "DLZ - Summary Report" (which is useful for learning the green outer structure boundary) and the string "Sample ID:,"" (which is useful to extract the orange sample ID).

The first rule of SS consists of a Map operator LinesMap that maps each line of a line sequence LS to a pair of positions within that line. The Decompose method for LinesMap takes as input a region R and a sequence of position pairs, and returns the sequence of lines from R that contain the corresponding position pairs.

The second (third) rule of SS pairs each position x in a position sequence PS with a position that occurs somewhere on its right (left) side.
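The PosSeq(x, (r1, r2)) semantics just described can be sketched directly in Python, with ordinary regexes standing in for the paper's token sequences (a hypothetical illustration, not the paper's matcher):

```python
import re

def pos_seq(x, r1, r2):
    """Return all positions k in string x such that some suffix of x[:k]
    matches r1 and some prefix of x[k:] matches r2."""
    return [k for k in range(len(x) + 1)
            if re.search("(?:%s)$" % r1, x[:k]) and re.match(r2, x[k:])]
```

For example, pos_seq("ab12cd", r"\d", r"[a-z]") returns the single position whose left side ends with a digit and whose right side starts with a letter; positions that satisfy only one of the two constraints are excluded, mirroring the "but not both" discussion of Ex. 5.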
Disjunctive Pos Pair Seq  N1 ::= Merge(SS1, . . . , SSn)
Pos Pair Region          N2 ::= Pair(Pos(R0, p1), Pos(R0, p2))
Pair Seq                 SS ::= LinesMap(λx : Pair(Pos(x, p1), Pos(x, p2)), LS)
                              | StartSeqMap(λx : Pair(x, Pos(R0[x :], p)), PS)
                              | EndSeqMap(λx : Pair(Pos(R0[: x], p), x), PS)
Line Seq                 LS ::= FilterInt(init, iter, BLS)
Bool Line Seq           BLS ::= FilterBool(b, split(R0, '\n'))
Position Seq             PS ::= LinesMap(λx : Pos(x, p), LS) | FilterInt(init, iter, PosSeq(R0, rr))
Predicate                 b ::= λx : True
                              | λx : {Starts,Ends}With(r, x) | λx : Contains(r, k, x)
                              | λx : Pred{Starts,Ends}With(r, x) | λx : PredContains(r, k, x)
                              | λx : Succ{Starts,Ends}With(r, x) | λx : SuccContains(r, k, x)
Position Attribute        p ::= AbsPos(k) | RegPos(rr, k)
Regex Pair               rr ::= (r1, r2)
Regex                     r ::= T{0, 3}
Token                     T ::= C+ | DynamicToken

Figure 7: The syntax of Ltext, the DSL for extracting text files.

The notation R0[x :] (R0[: x]) denotes the suffix (prefix) of the text value represented by R0 starting (ending) at position x. The Decompose method associated with StartSeqMap (EndSeqMap) takes as input a region R and a sequence of positions, and maps each position k in the input sequence to the string R[k :] (R[: k]).

The line sequence non-terminal LS uses a nested combination of FilterInt and FilterBool. The various choices for predicate b (used in FilterBool) have the expected semantics. For example, StartsWith(r, x) asserts that line x starts with regex r, while Contains(r, k, x) asserts that line x contains k occurrences of regex r. We also take hints from preceding and succeeding lines via the Pred* and Succ* predicates. For example, PredStartsWith(r, x) asserts that the line preceding x in the input text file starts with regex r.

The position sequence non-terminal PS includes expressions that select a position within each line of a line sequence (using the LinesMap operator), or that filter positions returned by the PosSeq operator (using the FilterInt operator).

EXAMPLE 4. Below is a program in Ltext for extracting the yellow regions in Ex. 1 (from the top-level region of the entire file).

    LinesMap(λx : Pair(Pos(x, p1), Pos(x, p2)), LS), where
      p1 = AbsPos(0), p2 = AbsPos(−1),
      LS = FilterInt(0, 1, FilterBool(λx : EndsWith([Number, Quote], x), split(R0, '\n')))

The FilterBool operator takes all the lines in the document and selects only those that end with a number and a quote. The FilterInt operator does not do any filtering (init = 0, iter = 1); it simply passes the result of FilterBool to LS. The map function in LinesMap returns the entire input line (AbsPos(0) denotes the beginning of the line, while AbsPos(−1) denotes the end of the line). The LinesMap operator thus returns a sequence identical to LS, which is the yellow sequence.

EXAMPLE 5. Below is a program for extracting the magenta regions in Ex. 1 (from the top-level region of the entire file).

    EndSeqMap(λx : Pair(Pos(R0[: x], p), x), PS), where
      p = RegPos(([DynamicTok(,"")], ε), −1),
      PS = FilterInt(0, 1, PosSeq(R0, (r1, r2))), where
      r1 = [DynamicTok(,""), Word], r2 = [DynamicTok("",), Number, Comma]

FlashExtract recognizes the prefixes (,"") and suffixes ("",) of the given examples as frequently occurring substrings and promotes them to dynamic tokens. The PosSeq operator returns the sequence of all end positions of the magenta sequence (since each of these has an r1 match on the left and an r2 match on the right). Note that there are other positions that either have an r1 match on the left (such as the position before the number in "Sample ID:,""5007-01"""), or have an r2 match on the right (such as the position after the character L in ""ug/L"",0.0009), but not both; hence, these positions are not selected by the PosSeq operator. Since FilterInt does not filter any elements, PS is the same sequence returned by the regex pair. The map function in EndSeqMap takes each end position in PS and finds its corresponding start position specified by p, which is the first position from the right (k = −1) that matches the dynamic token (,"") on the left side. The result is the magenta sequence.

EXAMPLE 6. If the magenta field is wrapped within the yellow structure, one of its extraction programs is as follows:

    Pair(Pos(R0, p1), Pos(R0, p2)), where
      p1 = RegPos(([DynamicTok(,"")], ε), 1), p2 = RegPos((ε, [DynamicTok("",)]), 1)

Since the yellow field is the structure-ancestor of the magenta field, FlashExtract learns a Pair operator to extract a magenta region within a yellow region. The start position of this pair is the first position from the left (k = 1) that matches (,"") on the left side (as r1), and the end position is the first position from the left that matches ("",) on the right side (as r2). This program is simpler than the one in Ex. 5, because it exploits the separation determined by the program for the yellow field.

Domain-specific Learn Methods  The learning of Boolean expressions b is performed using brute-force search. The learning of position attribute expressions p is performed using the technique described in prior work [10].

5.2 Webpage Instantiation

A region in this domain is either an HTML node, or a pair of character positions within the text property of an HTML node.

Language Lweb  Fig. 8 shows the syntax of the DSL Lweb for extracting data from webpages. XPath (XPaths) denotes an XPath expression that returns a single HTML node (a sequence of HTML nodes) within the input HTML node. The position attribute p and the regex pair rr are similar to those in the text instantiation DSL Ltext. Our token set additionally includes dynamic tokens that we create for the various HTML tags seen in the input webpage.

Disjunctive Seq  N1 ::= Merge(NS1, . . . , NSn) | Merge(SS1, . . . , SSn)
Region           N2 ::= XPath | Pair(Pos(R0, p1), Pos(R0, p2))
Node Seq         NS ::= XPaths
Pos Pair Seq     SS ::= SeqPairMap(λx : Pair(Pos(x.Val, p1), Pos(x.Val, p2)), ES)
                      | StartSeqMap(λx : Pair(x, Pos(R0[x :], p)), PS)
                      | EndSeqMap(λx : Pair(Pos(R0[: x], p), x), PS)
Element Seq      ES ::= FilterInt(init, iter, XPaths)
Position Seq     PS ::= FilterInt(init, iter, PosSeq(R0, rr))

Figure 8: The syntax of Lweb, the DSL for extracting webpages. Definitions of p and rr are similar to those in Fig. 7.

The non-terminal N1 represents expressions that compute a sequence of HTML nodes, or a sequence of position pairs within HTML nodes. The non-terminal N2 represents expressions that compute an HTML node, or a position pair within an HTML node. The design of Lweb is inspired by the design of Ltext. HTML nodes in Lweb play a role similar to that of lines in Ltext. We use XPath expressions to identify relevant HTML elements instead of using regular expressions to identify appropriate lines.
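As an illustration of the kind of generalization an XPath learner can perform (a simplified, hypothetical sketch; the actual learn method described below also incorporates node properties such as the number and types of children and attributes), example nodes given as root-to-node tag paths can be generalized into a single path expression by wildcarding the tags on which the examples disagree:

```python
def generalize_paths(paths):
    """Generalize example node paths (lists of tags from root to node)
    into one XPath-like expression, replacing inconsistent tags at any
    depth with '*'.  Assumes all example nodes occur at the same depth."""
    assert len({len(p) for p in paths}) == 1, "example nodes must share a depth"
    steps = []
    for tags in zip(*paths):               # compare tags level by level
        steps.append(tags[0] if len(set(tags)) == 1 else "*")
    return "/" + "/".join(steps)
```

For instance, two product-name nodes reached via /html/body/div/span and /html/body/p/span generalize to /html/body/*/span.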
The non-terminal SS represents expressions that generate a sequence of position pairs, either by mapping each HTML node in a sequence ES to position pairs (the SeqPairMap operator), or by pairing up each position in a sequence PS of positions with another position computed relative to it (the StartSeqMap and EndSeqMap operators).

Domain-specific Learn Methods  We need to define learn methods for XPath and XPaths from example HTML nodes. This is a well-defined problem in the data mining community, called wrapper induction (see §7). We implemented a learn method that generalizes example nodes to path expressions by replacing inconsistent tags at any depth with "*", and that additionally incorporates common properties of the example nodes. These properties include the number of children and their types, and the number of attributes and their types. The result is a list of XPath expressions, ranging from the most specific to the most general.

5.3 Spreadsheet Instantiation

A region in this domain is a rectangular region represented by a pair of cells or a single cell.

Language Lsps  Fig. 9 shows the syntax of our DSL. The non-terminal N1 represents expressions that extract a sequence of cell pairs or a sequence of cells from a given spreadsheet. The non-terminal N2 represents expressions that extract a cell pair or a single cell from a given spreadsheet.

Disjunctive Seq    N1 ::= Merge(PS1, . . . , PSn) | Merge(CS1, . . . , CSn)
Cell Pair Region   N2 ::= Pair(Cell(R0, c1), Cell(R0, c2)) | Cell(R0, c)
Pair Seq           PS ::= StartSeqMap(λx : Pair(x, Cell(R0[x :], c)), CS)
                        | EndSeqMap(λx : Pair(Cell(R0[: x], c), x), CS)
Cell Sequence      CS ::= FilterInt(init, iter, CE) | CellRowMap(λx : Cell(x, c), RS)
Row Sequence       RS ::= FilterInt(init, iter, RE)
Cell Attribute      c ::= AbsCell(k) | RegCell(cb, k)
Cell Split Seq     CE ::= FilterBool(cb, splitcells(R0))
Row Split Seq      RE ::= FilterBool(rb, splitrows(R0))
Cell Boolean       cb ::= λx : True | λx : Surround(T{9}, x)
Row Boolean        rb ::= λx : True | λx : Sequence(T+, x)

Figure 9: The syntax of Lsps, the DSL for extracting spreadsheets.

Lsps is inspired by Ltext and Lweb. The notion of a row in Lsps is similar to that of a line in Ltext and an HTML node in Lweb. Just as Boolean expressions in Ltext help filter lines by matching their content against regular expressions, the row Boolean expression rb in Lsps selects spreadsheet rows by matching the contents of consecutive cells inside a row against some token sequence. In addition, the cell Boolean expression cb selects cells by matching the contents of the cell and its 8 neighboring cells against some 9 tokens. As their names suggest, the two functions splitcells and splitrows split a spreadsheet region into a sequence of cells (obtained by scanning the region from top to bottom and left to right) and into a sequence of rows, respectively, on which cell and row Boolean expressions can be applied.

Domain-specific Learn Methods  We need to define learn methods for the domain-specific top-level non-terminals c, cb and rb. The learning of the Boolean expressions cb and rb is performed using brute-force search: to learn cb, we simply perform a brute-force search to find all matching cell Boolean expressions, and learning the row expression rb is similar. The learning of cell attribute expressions c is performed in a manner similar to that of position attribute expressions p in Ltext.

6. Evaluation

We implemented the FlashExtract framework and its three instantiations (described in §5) in C#. We conducted all experiments on a machine running Windows 7 with an Intel Core i7 2.67GHz CPU and 6GB RAM. Via this evaluation, we seek to answer the following questions related to the effectiveness of FlashExtract.

• Can FlashExtract describe DSLs that are expressive enough for extraction tasks on real-world files?
• How many examples are required to extract the desired data?
• How efficient is FlashExtract in learning extraction programs?

Real-world Benchmarks  We collected 75 test documents in total, 25 for each of the three domains. The text file domain is very broad: a text file might be relatively structured (such as a log file), or loosely structured (as in text copied and pasted from webpages or PDF documents). We selected a representative benchmark set that includes a few files of each kind. Additionally, we included some benchmarks from the book "Pro Perl Parsing" [9], which teaches how to use Perl to parse data.

For the webpage domain, we used the benchmark from [20], which describes SXPath, an extension of XPath to perform queries on Web documents. This benchmark includes 25 popular e-commerce websites with different underlying structures. For each of the websites, there are two test cases, corresponding to the HTML elements of the product name and the product price. In addition to these, we add a test case for the region covering all product information, and another test case for the actual price number (ignoring other text that may occur in the price element, such as "sale", "$" or "USD").

For the spreadsheet domain, we obtained 25 documents from two sources: the benchmark used in previous work on spreadsheet transformation [13] (we selected the 7 documents from this benchmark that were associated with non-trivial extraction tasks), and the EUSES corpus [15].

Experimental Setup  For each document, we wrote down an appropriate schema describing the type of the hierarchical data inside the document, and we manually annotated all instances of the various fields in that schema to precisely define the extraction task. We used FlashExtract to learn the extraction programs for each field. Recall that we can learn the extraction logic for a field f by relating it to any of its ancestors. Among these, relating to ⊥ is typically the hardest (in contrast, relating to one of the other ancestors can exploit the separation that has already been achieved by the extraction logic for that ancestor).

We wrote a script to simulate user interaction with FlashExtract, to measure its effectiveness in the above-mentioned hardest scenario. Let R̃ denote all manually annotated instances of a field f. The simulator starts out with an empty set of negative instances and a singleton set of positive instances that includes the first region in R̃, and repeats the following process in a loop. In each iteration, the simulator invokes FlashExtract to synthesize a field extraction program from the various examples. If FlashExtract fails to synthesize a program, the simulator declares failure. Otherwise, the simulator executes the program to find any mismatch w.r.t. the golden result R̃. If no mismatch exists, the simulator declares success. Otherwise, the simulator adds the first mismatched region as a new example: it is added as a positive instance if the mismatched region occurs in R̃ but is not highlighted in the execution result of the previous interaction; otherwise, the mismatch is added as a negative instance. Furthermore, the simulator also adds all new highlighted regions that occur before the new example as positive instances.
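The simulator loop above can be sketched as follows (a hypothetical Python rendering with regions modeled as comparable values in document order; synthesize and run stand in for FlashExtract's learner and the learned program's execution):

```python
def simulate(golden, synthesize, run, max_rounds=100):
    """golden: ordered list of annotated regions.  synthesize maps
    (positives, negatives) to a program or None.  run executes a program,
    returning the highlighted regions.  Returns (program, #examples)."""
    pos, neg = [golden[0]], []            # seed with the first golden region
    for _ in range(max_rounds):
        prog = synthesize(pos, neg)
        if prog is None:
            return None, len(pos) + len(neg)      # synthesis failed
        golden_s, out_s = set(golden), set(run(prog))
        if out_s == golden_s:
            return prog, len(pos) + len(neg)      # matches the golden result
        m = min(golden_s ^ out_s)                 # first mismatch in doc order
        if m in golden_s:
            pos.append(m)                         # missed region -> positive
        else:
            neg.append(m)                         # spurious region -> negative
        # correctly highlighted regions before m also become positives
        pos += [r for r in out_s & golden_s if r < m and r not in pos]
    return None, len(pos) + len(neg)
```

With a toy learner whose "program" is a threshold t extracting regions 1..t, extracting golden = [1, 2, 3] converges after three examples.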
[Bar charts over the 75 benchmark documents, grouped by domain (text files, webpages, spreadsheets).]

Figure 10: Average number of examples (solid/white bars represent positive/negative instances) across various fields for each document.

Figure 11: Average learning time of the last interaction across various fields for each document.
Expressiveness   Each of the three instantiations of FlashExtract was successfully able to synthesize a desired field extraction program for the various tasks in the respective domains. Thus, FlashExtract supports data extraction DSLs that are expressive enough to describe a variety of real-world data extraction tasks.

Number of Examples   FlashExtract required an average of 2.86 examples per field across all documents. Fig. 10 shows the average number of examples (over all fields in the schema for a given document), split by positive/negative instances, for each of the 75 documents in our benchmark. We observe that users have to give more examples to extract data from text files because the structure of text files is more arbitrary. In contrast, webpages and spreadsheets are generally more structured (because of HTML tags and row/column-based organization, respectively), and thus require fewer examples.

Synthesis Time   Users interactively give more examples to FlashExtract in order to learn a field extraction program. We measure the synthesis time required by FlashExtract during the last iteration before FlashExtract succeeds or fails (since this last iteration has the largest number of examples, it typically consumes the longest synthesis time). FlashExtract required an average of 0.82 seconds per field across all documents. Fig. 11 reports the average synthesis time (over all fields in the schema for a given document) from the last set of examples, for each of the 75 documents in our benchmark. While most text files and webpages require less than a second per field, spreadsheets sometimes take a few seconds to complete. This is because the spreadsheet DSL is richer, with a larger search space.

7. Related Work

We survey some recent work on programming by examples, and then discuss work related to each of our instantiation domains.

Programming by Examples   Programming by Example (PBE) techniques often construct a program in an underlying DSL from a set of input/output examples [11]. This area has been gaining renewed interest because recent advances in PBE can help improve the productivity of millions of end users [12]. For instance, Gulwani et al. have developed techniques that enable end users to perform string transformations [10, 21], number transformations [22], and table transformations [13] from examples. While prior work addresses transformations, we address a different problem: extraction. More significantly, prior synthesis techniques are specialized to an underlying DSL, while ours is more general and can be applied to any DSL that is constructed using our core algebra.

FlashExtract's extraction capability actually complements the transformation capability of prior work; in fact, we have combined them to provide a better end-to-end user experience. For example, after using FlashExtract to extract data from a text file, the user can perform string transformations [10] or number transformations [22] to modify the extracted fields. Our prototype even allows in-place editing by examples: FlashExtract is used to highlight regions that need to be edited repetitively, and string transformation techniques [10] are used to perform the transformation on leaf regions (and these changes are pushed back to the underlying document).

Region Highlighting by Examples   LAPIS [18] allows users to highlight sequences of regions in text files and webpages using a variety of means including examples (in addition to user-specified grammars and regular expressions), and then manipulate them using a region algebra. However, LAPIS is limited in its ability to let users create and manipulate hierarchical structure by examples. STEPS [24] allows users to color and edit hierarchical regions in text files using small mock examples. In contrast, we allow the user to provide examples on the original input document rather than having to create mock examples. In particular, we allow the user to provide a small number of positive instances of the regions that they want to highlight on the document. (The user-provided instances need not be a strict prefix of the set of all positive instances; this flexible interaction is enabled by allowing the user to also indicate negative instances.)

Besides the user interface, another key difference is how the underlying learning algorithms operate. LAPIS performs lightweight inference (enumerative search over pre-defined patterns) that does not even leverage the document decomposition (when extracting fields with a hierarchical arrangement). STEPS uses machine learning techniques to identify an appropriate composition of a given set of components. In contrast, we perform a systematic search for programs using a mostly divide-and-conquer paradigm, in which we reduce the problem of learning expressions with a certain top-level operator to the problem of learning sub-expressions based on the properties of that operator. This enables efficient synthesis of larger and more sophisticated field extraction programs.

Data Extraction from Log Files   The PADS project [7] has enabled simplification of ad hoc data processing tasks for programmers
by contributing along several dimensions: development of domain-specific languages for describing text structure or data formats, learning algorithms for automatically inferring such formats [8], and a markup language that allows users to add simple annotations to enable more effective learning of text structure [23]. While PADS supports parsing of entire files, FlashExtract allows users to extract only parts of the file, thereby avoiding unnecessary complications. PADS's learner only supports a fixed line-by-line chunking strategy to split the records; in contrast, FlashExtract can learn chunking (i.e., structure boundaries) from examples, making it suitable for extracting data fields and records that have arbitrary length (and might cross multiple lines). Finally, PADS primarily targets ad hoc text files. Although one can view webpages and spreadsheets as text files, it is unclear whether the PADS learning algorithm can be adapted to work effectively for webpages and spreadsheets.

Data Extraction from Webpages   Wrappers are procedures to extract data from Internet resources. Wrapper induction is the method of automatically constructing wrappers [17]. There has been a wide variety of work in this area, ranging from supervised systems [14, 17, 19] and semi-supervised systems [4] to unsupervised systems [5]. FlashExtract differs from the above systems in that its users induce wrappers by interactively giving multiple positive/negative examples. In that sense, FlashExtract is similar to [3]. However, the system in [3] only learns XPath expressions to extract HTML elements. By defining other sequence operators to handle non-HTML text (i.e., text that is within a tag), FlashExtract supports finer-grained extraction (e.g., extracting a substring or a sequence of substrings from a text tag, as in Fig. 2). Furthermore, we can leverage advances in wrapper induction research as part of the FlashExtract general framework to support much more sophisticated extraction tasks.

Data Extraction from Spreadsheets   Cunha et al. [6] detect functional dependencies in spreadsheet data in order to automatically derive the data schema. However, their technique is not effective over spreadsheets with hierarchical data. Abraham et al. identify spreadsheet headers automatically [2] and use them to extract relational data. In contrast, FlashExtract extracts (data from) cells based on the properties of the surrounding cells. This allows FlashExtract to deal with spreadsheets that have no headers. Furthermore, instead of inferring the whole schema at once, FlashExtract allows users to work in an interactive manner. Users may focus only on the cells of interest, which enables robustness on complex spreadsheets. OpenRefine [1] and Wrangler [16] help users clean and transform their spreadsheet data into relational form. While OpenRefine typically requires users to program, Wrangler automatically infers likely transformation rules and presents them in natural language. However, these tools are limited in their extraction capabilities over spreadsheets with hierarchical data.

8. Conclusion and Future Work

Various common document types such as text files, spreadsheets, and webpages allow users to be creative in using the underlying rich layout capabilities to store multi-dimensional and hierarchical data in a two-dimensional layout. Existing data extraction solutions are domain-specific and require programming skills. We formalize the problem of data extraction in a document-independent manner and present an end-user friendly example-based interaction model.

Our synthesis approach advances the state of the art in example-based program synthesis. Instead of designing a DSL and a custom synthesis algorithm, we define a core algebra of operators and associate a compositional synthesis strategy with each of them. This provides a free synthesis algorithm for any DSL that is constructed using those operators. We give three examples of DSLs for extracting data from different document types, namely text files, webpages, and spreadsheets. The respective synthesis algorithms are able to extract the intended data using very few examples and in real time.

We foresee two interesting directions for future work. (a) Extending our framework to enable data extraction from more sophisticated document types such as PDF documents and images. (b) Designing a good debugging environment that helps users identify potential discrepancies between what the user intended and the result produced by the synthesized program. This is especially important in two scenarios: (i) when the synthesized program might be executed against other documents with similar formatting, and (ii) when the initial set of user-provided examples might be used to re-synthesize a new extraction program in light of formatting changes in the initial document (in fact, the potential for robustness to changes in the input structure might make PBE a more attractive alternative to traditional programming in the first place!).

References

[1] OpenRefine. http://openrefine.org/.
[2] R. Abraham and M. Erwig. Header and unit inference for spreadsheets through spatial analyses. In VL/HCC, 2004.
[3] T. Anton. XPath-wrapper induction by generalizing tree traversal patterns. In LWA, 2005.
[4] C.-H. Chang and S.-C. Lui. IEPAD: Information extraction based on pattern discovery. In WWW, 2001.
[5] V. Crescenzi, G. Mecca, and P. Merialdo. RoadRunner: Towards automatic data extraction from large web sites. In VLDB, 2001.
[6] J. Cunha, J. Saraiva, and J. Visser. From spreadsheets to relational databases and back. In PEPM, 2009.
[7] K. Fisher and D. Walker. The PADS project: An overview. In ICDT, 2011.
[8] K. Fisher, D. Walker, K. Q. Zhu, and P. White. From dirt to shovels: Fully automatic tool generation from ad hoc data. In POPL, 2008.
[9] C. Frenz, editor. Pro Perl Parsing. Apress, 2005.
[10] S. Gulwani. Automating string processing in spreadsheets using input-output examples. In POPL, 2011.
[11] S. Gulwani. Synthesis from examples: Interaction models and algorithms. In SYNASC, 2012.
[12] S. Gulwani, W. R. Harris, and R. Singh. Spreadsheet data manipulation using examples. Commun. ACM, 55(8), 2012.
[13] W. R. Harris and S. Gulwani. Spreadsheet table transformations from examples. In PLDI, 2011.
[14] C.-N. Hsu and M.-T. Dung. Generating finite-state transducers for semi-structured data extraction from the web. Inf. Syst., 23(9), 1998.
[15] M. Fisher II and G. Rothermel. The EUSES spreadsheet corpus: A shared resource for supporting experimentation with spreadsheet dependability mechanisms. In Workshop on End-User Software Engineering, 2005.
[16] S. Kandel, A. Paepcke, J. Hellerstein, and J. Heer. Wrangler: Interactive visual specification of data transformation scripts. In CHI, 2011.
[17] N. Kushmerick, D. S. Weld, and R. B. Doorenbos. Wrapper induction for information extraction. In IJCAI (1), 1997.
[18] R. C. Miller. Lightweight Structure in Text. PhD dissertation, Carnegie Mellon University, 2002.
[19] I. Muslea, S. Minton, and C. A. Knoblock. A hierarchical approach to wrapper induction. In Agents, 1999.
[20] E. Oro, M. Ruffolo, and S. Staab. SXPath: Extending XPath towards spatial querying on web documents. Proc. VLDB Endow., 4(2), 2010.
[21] R. Singh and S. Gulwani. Learning semantic string transformations from examples. PVLDB, 5(8), 2012.
[22] R. Singh and S. Gulwani. Synthesizing number transformations from input-output examples. In CAV, 2012.
[23] Q. Xi and D. Walker. A context-free markup language for semi-structured text. In PLDI, 2010.
[24] K. Yessenov, S. Tulsiani, A. K. Menon, R. C. Miller, S. Gulwani, B. W. Lampson, and A. Kalai. A colorful approach to text processing by example. In UIST, 2013.