GYQVU7MQYQQX2HYHQIHZ3LF7GRY4MNBNIVJUBXSDLURXVCAK7GRQC
% This is a document written to introduce students in MATH 2300-04 at FSU to LaTeX and Overleaf. Any other students are free to use this as well.
% All of this stuff with '%' in front is a comment and ignored by the compiler.
%
% The lines before the "\begin{document}" line is called the preamble.
% This is where you load particular packages you need.
% Until you are more experienced, or the program says you are missing packages, it is safe to ignore it.
%
%----------------------------------
\documentclass[12pt]{scrartcl}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage[margin=1in]{geometry}% Change the margins here if you wish.
\setlength{\parindent}{0pt} % This is the set the indent length for new paragraphs, change if you want.
\setlength{\parskip}{5pt} % This sets the distance between paragraphs, which will be used anytime you have a blank line in your LaTeX code.
\pagenumbering{gobble}% This means the page will not be numbered. You can comment it out if you like page numbers.
\usepackage{cutwin}
%These packages allow the most of the common "mathly things"
\usepackage{amsmath,amsthm,amssymb}
\usepackage{mathtools}
\usepackage{tikz}
\usepackage{tikz-qtree}
\usepackage{forest}
\usetikzlibrary{arrows}
\newenvironment{spreadTrees}{%
\expandafter\def\expandafter\pgfpicture\expandafter{\expandafter\hfill\pgfpicture}%
\center}{\hfill\null\endcenter}
\tikzset{
treenode/.style = {align=center, inner sep=0pt, text centered,
font=\sffamily},
every tree node/.style={align=center, anchor=north}
}
\usepackage{libertinus, libertinust1math}
\newtheorem{remark}{Remark}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{definition}{Definition}
%This package allows you to add graphs or any other images.
\usepackage{graphicx}
%These are the packages I usually use and needed for this document. There are bajillions of others to do nearly antyhing you want.
\usepackage{color}
\usepackage{enumerate}
\setcounter{secnumdepth}{-99}%\partnumdepth}
\newcommand\restr[2]{{% we make the whole thing an ordinary symbol
\left.\kern-\nulldelimiterspace % automatically resize the bar with \right
#1 % the function
\vphantom{\big|} % pretend it's a little taller at normal size
\right|_{#2} % this is the delimiter
}}
\makeatletter
\renewcommand*\section{\@startsection{section}{1}{\z@}%
{-0.5ex \@plus -1ex \@minus -.2ex}%
{0.5ex \@plus.2ex}%
{\raggedsection\large\bfseries\scshape\nobreak\centering}%
}
\renewcommand*\subsection{\@startsection{subsection}{2}{\z@}%
{-0.25ex\@plus -1ex \@minus -.2ex}%
{0.25ex \@plus .2ex}%
{\raggedsection\normalsize\bfseries\scshape\nobreak}%
}
\makeatother
%------------------------------------------
% The stuff you want to edit starts here.
%------------------------------------------
% \newcommand{\cupdot}{\mathbin{\mathaccent\cdot\cup}}
\newcommand{\CC}{\ensuremath{\mathcal{P}(A)}}
\newcommand{\mc}[1]{\ensuremath{\mathcal{#1}}}
\newcommand{\cpl}[1]{\ensuremath{#1^\mathsf{c}}}
\newcommand{\tto}{\ensuremath{\supset\!\to}}
\renewcommand{\restriction}{\mathord{\upharpoonright}}
\begin{document}
\section{Incompletely specified Muller conditions}
In the following we assume $C = \{c_1, c_2, \dotsc, c_k\}$ be a non-empty, finite set of labels. Further we use an alternative definition of Muller conditions as functions mapping subsets of $C$ to a symbol from $\Theta = \{+,-\}$. Specifically this means that a function $\varsigma : \CC \to \Theta$ is equivalent to the Muller condition $\mc{F} = (F_0, F_1)$ where $F_0 = \{X \subseteq C : \varsigma(X) = +\}$ and $F_1 = \{X \subseteq C: \varsigma(X) = -\}$. As it is clear that both definitions are equivalent and can be readily transformed into each other, we will use them interchangingly wherever it aids in simplifying notation.
A \emph{Hole condition} now drops the totality requirement on $\varsigma$ and is instead given by a partial function $\sigma : \CC \tto \Theta$, which allows for \emph{holes} in the pre-image. This means that for some sets $X \subseteq C$ we have that $\sigma(X)$ is undefined, representing dont-care infinity sets, which can be either accepting or rejecting. In the style of associating $F_0$ and $F_1$ with those sets that produce $+$ and $-$ when mapped using $\varsigma$, we do the same for Hole conditions and set $F_\bot$ to be the set of all $X \subseteq C$ for which $\sigma(X)$ is undefined. Later we will abuse notation to allow us to write for example $\sigma(X) = + \neq \sigma(Y)$ to denote that $\sigma(Y) \in \{-, \bot\}$ where $\sigma(Y) = \bot$ denotes that $\sigma(Y)$ is undefined.
What we now want to investigate is whether it is always possible to extend a Hole condition $\sigma$ to a permissible Muller condition $\varsigma$ such that $\varsigma(X) = \sigma(X)$ for all $X$ that are in the domain of definition of $\sigma$. Of special interest in this investigation are the \emph{union-closed} Muller/Hole conditions, which satisfy
\[\mbox{for } i \in \{0, 1\} \mbox{ and all } X,Y \in F_i \mbox{ we have } \sigma(X) = \sigma(Y) = \sigma(X \cup Y).\]
It is well known that the union-closed Muller conditions coincide with Parity conditions and it is easily possible to define a colouring as we will show later.
One key instrument in analyzing a Muller condition is the so-called \emph{Zielonka Tree} $\mc{Z}(\sigma)$ which can also be defined for a Hole condition $\sigma$. For this definition we will make use the restriction of a (partial) function $f : A \to B$ to some set $A' \subseteq A$ which will be denoted as $f\restriction_{A'}$. Let $C_1, \dotsc, C_n$ be the maximal sets in $\{X \subseteq C : \sigma(X) \neq \sigma(C)\}$ then the Zielonka tree $\mc{Z}_\sigma$ is inductively defined to have the root $(C, \sigma(C))$ to which the subtrees $\mc{Z}(\sigma\restriction_{C_1}), \dotsc,\mc{Z}(\sigma\restriction_{C_n})$ are attached. Note that there cannot exist $i \neq j$ with $\sigma(C_i) = \sigma(C_j) \neq \bot$.
For two Hole conditions $\sigma, \sigma'$ that share both domain and codomain we say that $\sigma'$ \emph{extends} $\sigma$, denoted as $\sigma' \triangleright \sigma$, if:
\[
\sigma(A) = x \neq \bot \implies \sigma'(A) = x
\]
in other words the two coincide on all sets for which $\sigma$ is defined. While it is possible to prove that for any union-closed Hole condition $\sigma$ a permissible union-closed Muller condition $\varsigma \triangleright \sigma$ exists, through using an argument based on contraposition, we want forego this step and instead define a resolution strategy that achieves this. One important caveat to the definition of such a strategy is that we would like it to avoid the possible exponential blowup in the translation. To this end we use the \emph{top-down} approach which starts at the root $C$ of a Zielonka Tree $\mc{Z}(\sigma)$ and sets for all children $X$ of $C$ with $\sigma(X) = \bot$ that $\varsigma(X) = \varsigma(C)$. We will now provide an examle of this strategy to illustrate how it works based on the Hole condition
\[\sigma : \mc{P}(\{c_1,c_2,c_3,c_4\}) \to \Theta \mbox{ with:}\]
\[\sigma(X) = \begin{cases}
+\ \mbox{ if } X \in \{\{c_1,c_2,c_3,c_4\},\{c_1,c_2\},\{c_1,c_4\},\{c_2,c_4\},\{c_3\}\} \\
-\ \mbox{ if } X \in \{\{c_1,c_2,c_3\},\{c_1,c_3\},\{c_1\}\} \\
\bot\ \mbox{ otherwise}
\end{cases}\]
\begin{center}
\begin{tikzpicture}[->, node distance = 2.5cm]
\node (A00) {$(\{c_1,c_2,c_3,c_4\},+)$};
\node[below of=A00, red, yshift=0.7cm] (A11) {$(\{c_1,c_3,c_4\}, \bot)$};
\node[left of=A11,] (A10) {$(\{c_1,c_2,c_3\},-)$};
\node[right of=A11] (A12) {$(\{c_2,c_3,c_4\}, \bot)$};
\node[below of=A11, yshift=.7cm] (A22) {$(\{c_1,c_3\},-)$};
\node[left of=A22] (A21) {$(\{c_2,c_3\}, \bot)$};
\node[left of=A21] (A20) {$(\{c_1,c_2\}, +)$};
\node[right of=A22] (A23) {$(\{c_1,4c_\}, +)$};
% \node[right of=A23] (A24) {$(\{3,4\}, \bot)$};
\node[right of=A23] (A25) {$(\{c_2,c_4\}, +)$};
\node[below of=A23,yshift=.7cm] (A32) {$(\{c_3\}, +)$};
\node[left of=A32] (A31) {$(\{c_2\},\bot)$};
\node[left of=A31] (A30) {$(\{c_1\}, -)$};
\node[right of=A32] (A33) {$(\{c_4\}, \bot)$};
\draw[->] (A00) -- (A11);
\draw[->] (A00) -- (A10);
\draw[->] (A00) -- (A12);
\draw[->] (A10) -- (A20);
\draw[->] (A10) -- (A21);
\draw[->] (A11) -- (A22);
\draw[->] (A11) -- (A23);
% \draw[->] (A11) -- (A24);
% \draw[->] (A12) -- (A24);
\draw[->] (A12) -- (A25);
\draw[->] (A20) -- (A30);
\draw[->] (A20) -- (A31);
\draw[->] (A21) -- (A32);
\draw[->] (A22) -- (A32);
\draw[->] (A23) -- (A30);
\draw[->] (A23) -- (A33);
\draw[->] (A25) -- (A31);
\draw[->] (A25) -- (A33);
\end{tikzpicture}
\end{center}
\vspace{4mm}
\begin{minipage}{0.57\textwidth}
\begin{tikzpicture}[->, node distance = 2.4cm]
\node (A00) {$(\{c_1,c_2,c_3,c_4\},+)$};
\node[below left of=A00] (A10) {$(\{c_1,c_2,c_3\},-)$};
\node[below right of=A00,red] (A12) {$(\{c_2,c_3,c_4\}, \bot)$};
\node[below of=A11, yshift=.7cm] (A21) {$(\{c_2,c_3\}, \bot)$};
\node[left of=A21] (A20) {$(\{c_1,c_2\}, +)$};
\node[right of=A22] (A25) {$(\{c_2,c_4\}, +)$};
\node[below of=A23,yshift=.7cm] (A32) {$(\{c_3\}, +)$};
\node[left of=A32] (A31) {$(\{c_2\},\bot)$};
\node[left of=A31] (A30) {$(\{c_1\}, -)$};
\node[right of=A32] (A33) {$(\{c_4\}, \bot)$};
\draw[->] (A00) -- (A10);
\draw[->] (A00) -- (A12);
\draw[->] (A10) -- (A20);
\draw[->] (A10) -- (A21);
\draw[->] (A12) -- (A25);
\draw[->] (A20) -- (A30);
\draw[->] (A20) -- (A31);
\draw[->] (A21) -- (A32);
\draw[->] (A23) -- (A31);
\draw[->] (A23) -- (A33);
\end{tikzpicture}
\end{minipage}
\begin{minipage}{0.42\textwidth}
\begin{tikzpicture}[->, node distance = 2.4cm]
\node (A00) {$(\{c_1,c_2,c_3,c_4\},+)$};
\node[below of=A00, yshift=.7cm] (A10) {$(\{c_1,c_2,c_3\},-)$};
\node[below of=A11, yshift=.7cm, red] (A21) {$(\{c_2,c_3\}, \bot)$};
\node[left of=A21] (A20) {$(\{c_1,c_2\}, +)$};
\node[below of=A23,yshift=.7cm] (A32) {$(\{c_3\}, +)$};
\node[left of=A32] (A31) {$(\{c_2\},\bot)$};
\node[left of=A31] (A30) {$(\{c_1\}, -)$};
\draw[->] (A00) -- (A10);
\draw[->] (A10) -- (A20);
\draw[->] (A10) -- (A21);
\draw[->] (A20) -- (A30);
\draw[->] (A20) -- (A31);
\draw[->] (A21) -- (A32);
\end{tikzpicture}
\end{minipage}
\vspace{12mm}\\
\begin{minipage}{0.45\textwidth}
\begin{tikzpicture}[->, node distance = 2.5cm]
\node (A00) {$(\{c_1,c_2,c_3,c_4\},+)$};
\node[below of=A00, yshift=.7cm] (A10) {$(\{c_1,c_2,c_3\},-)$};
\node[below of=A11, yshift=.7cm] (A20) {$(\{c_1,c_2\}, +)$};
\node[below of=A20,yshift=.7cm] (A31) {$(\{c_1\},-)$};
\node[right of=A31, red] (A30) {$(\{c_2\}, \bot)$};
\draw[->] (A00) -- (A10);
\draw[->] (A10) -- (A20);
\draw[->] (A20) -- (A30);
\draw[->] (A20) -- (A31);
\end{tikzpicture}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\begin{tikzpicture}[->, node distance = 2.5cm]
\node (A00) {$(\{c_1,c_2,c_3,c_4\},+)$};
\node[below of=A00, yshift=.7cm] (A10) {$(\{c_1,c_2,c_3\},-)$};
\node[below of=A11, yshift=.7cm] (A20) {$(\{c_1,c_2\}, +)$};
\node[below of=A20,yshift=.7cm] (A31) {$(\{c_1\},-)$};
\draw[->] (A00) -- (A10);
\draw[->] (A10) -- (A20);
\draw[->] (A20) -- (A31);
\end{tikzpicture}
\end{minipage}
The nodes marked in red illustrate for each depicted DAG the node that is determined in the current step. For example in the first step the set $\{c_1,c_3,c_4\}$ is determined to be $+$, just as its parent $\{c_1,c_2,c_3,c_4\}$. This in turn leads to the disappearance of both $(\{c_1,c_3\},-)$ and $(\{c_1,c_4\},+)$ from the tree as they are not maximal different-typed subsets of any set in the remaining tree. Since the Hole condition $\sigma$ we started with was union-closed, the Zielonka tree we obtain after applying the top-down determination is a path, meaning the resulting Muller condition $\varsigma \triangleright \sigma$ is also union-closed. We can thus define the parity condition
\[c: C \to \{0,1,2,3\} \mbox{ with } c_1 \mapsto 3, c_2 \mapsto 2, c_3\mapsto 1, c_4 \mapsto 0.\]
During the following proof we will assume that a Hole condition is given as a partial function $\sigma : C \to \Theta$ for a finite non-empty set $C = \{c_0, c_1, \dotsc, c_{n-1}\}$. We will further make use of the explicit form of the condition, denoted as $\mc{H}_\sigma = (F_0^\sigma, F_1^\sigma, F_\bot^\sigma)$ where $F_0^\sigma = \sigma^{-1}(+), F_1^\sigma = \sigma^{-1}(-)$ and $F_\bot^\sigma = C \setminus (F_0 \cup F_1)$.
\begin{lemma}
A top-down determined Zielonka tree of a union-closed Hole condition $\sigma$ over colors $C = \{0, 1, \dotsc, n-1\}$ with $\sigma(C) \in \{+, -\}$ is a path.
\end{lemma}
\begin{proof}
We prove this claim by induction on the number of colors $|C|$.\\
For $|C| = 1$ i.e. $C = \{0\}$ this claim trivially holds as $C \in F_0^\sigma \cup F_1^\sigma$ implies $F_?^\sigma = \emptyset$ and hence $\mc{H}_\sigma$ is already equivalent to a union-closed Muller condition. The Zielonka tree $\mc{Z}(\sigma)$ consists of only one node, labelled $(x, C)$ with $\sigma(C) = x$.\\
We assume that the claim holds for the top-down determined Zielonka tree of any union-closed Hole condition over a set of $n$ colors and consider a Hole condition $\sigma : C \to \Theta$ over $C = \{0, 1, \dotsc, n\}$ i.e. $|C| = n+1$. Since $C \in F_0^\sigma \cup F_1^\sigma$ we can assume wlog that $C \in F_0^\sigma$ and hence $(+, C)$ is the root of $\mc{Z}(\sigma)$ as we can simply swap $F_0$ and $F_1$ otherwise.
\begin{center}
\begin{tikzpicture}[-, >=stealth', level/.style={sibling distance = 2.5cm/#1, level distance = 1.5cm},
emph/.style={edge from parent/.style={dashed,draw}},
notavail/.style={edge from parent/.style={}},
norm/.style={edge from parent/.style={solid,black,thin,draw}}]
\node {$(+, C)$}
child[emph] {
node {$(-, C_0)$}
}
child[norm]{
node {$(x_1, C_1)$}
}
child {
node {$(x_2, C_2)$}
}
child[notavail] {
node {$\dotsc$}
}
child[norm] {
node {$(x_k, C_k)$}
}
;
\end{tikzpicture}
\end{center}
Now let $\mathbb{C} = \{C_0, \dotsc, C_k\}$ be the maximal sets in $\{X \subseteq C: \sigma(X) \neq +\}$. Note that due to $\sigma$ being union-closed, we have that $|\mathbb{C} \cup F_1^\sigma| \leq 1$. We now reorder these $C_i$ such that $\sigma(C_i) = \bot$ for $0 < i \leq k$ and hence if $C_0$ exists then $\sigma(C_0) = -$. To $(+, C)$ we now attach the subtrees
\begin{itemize}
\item $\mc{Z}(\sigma \restriction_{\mc{P}(C_0)})$ if $C_0$ exists
\item $\mc{Z}(\sigma \restriction_{\mc{P}(C_i)}[C_i \mapsto +])$ for $0 < i \leq k$ where $f[X \mapsto y](A) = \begin{cases}
y \mbox { if } A = X \\
f(A) \mbox{ otherwise}.
\end{cases}$
\end{itemize}
Since we clearly have that $|C_i| < |C|$, the induction hypothesis is applicable to each of these subtrees, yielding $P_1, \dotsc, P_k$ a set of Zielonka paths. Note, that the fact that these are paths in turn guarantees that each $\sigma \restriction_{\mc{P}(C_i)}$ for $i \leq k$ is again union-closed.\\
Now assume that the whole construct does not collapse into a single path, meaning there must exist two sets $(-, A)$ and $(-, B)$ with $\sigma(A \cup B) = +$ in some paths $P_i, P_j$. Clearly we must have $i \neq j$ since otherwise both $(-, A)$ and $(-, B)$ would lie in $P_i$, contradicting that $P_i$ is a path. Without loss of generality we assume that $\sigma(A)$ is not defined as at least one of the two sets has to lie in $F_\bot^\sigma$ since otherwise the assumption of union-closedness would be violated. Thus there exist $A' \in F_-$ with $A \subseteq A' \subseteq C$ and $B' \in F_-^\sigma$ with $B \subseteq B' \subseteq C$. This in turn means that $Z = A' \cup B'$ exists with $Z \in F_-^\sigma$ (since $F_-^\sigma$ is closed under union) as well as $Z \subseteq C$.\\
Now we know that either $Z$ itself is a maximal set from $C_0, \dotsc, C_k$ in which case $Z = C_l$ or we have $Z \subseteq C_l$. But then this means that clearly $A, B \in \mc{P}(C_l)$, which means that $A$ and $B$ co-occur in the same subtree $T_l$ of $(+, C)$. Our induction hypothesis, however, guarantees that $T_l$ collapses into a path. This implies that either $A \subset B$ or $B \subset A$ which overall guarantees $\sigma(A \cup B) = -$, which is a contradiction to our assumption, which concludes our proof.
\end{proof}
Now we turn to individual determination steps as they will be performed by the algorithm we will eventually implement. During each such step, the maximal subset $X \subseteq C$ with $\sigma(X) = \bot$ is determined to have the same type as its parent in the Zielonka Tree representation of $\sigma$, which can be done by starting at $C$ itself, determining all its children and continuing from the unique remaining child of $C$ in the same way and we will denote the Hole condition obtained after one such step as $\sigma'$. As a measure of complexity for a Hole condition we will from now on use number of alternations defined as:
\[
|\sigma| = \max \{\mathsf{ad}_\sigma(K) : K = X_n \subsetneq X_{n-1} \subsetneq \dotsc \subsetneq X_0 \mbox{ with } X_0 = C\} \mbox{ for } n = |C| - 1.
\]
where the alternation depth $\mathsf{ad}_\sigma$ of a chain $K$ is defined as the number of pairs $(i, j)$ with $i < j < n$ such that $\sigma(X_i) = 1 - \sigma(X_j)$ and $\sigma(X_k) = \bot$ for all $i < k < j$.
\begin{remark}
Let $\sigma$ be a union-closed Hole condition and $\sigma'$ be the result of one determination step then we have that $|\sigma| = |\sigma'|$.
\end{remark}
\begin{proof}
Clearly we cannot have $|\sigma| > |\sigma'|$ as alternations cannot be removed by a determination step. Indeed if $K = X_n \subsetneq X_{n-1} \subsetneq \dotsc \subsetneq X_0$ is the chain with the maximal number of alternations in $\sigma$ then for each $0 \leq i < n$ with $\sigma(X_i) = \bot$ it holds that either still $\sigma'(X_i) = \bot$ or $\sigma'(X_i) = \sigma(X_{i+1}) = \sigma'(X_{i+1})$, which in turn does not increase the number of alternations.\\
Assume now that $|\sigma| < |\sigma'|$ and consider the chain $K = X_n \subsetneq X_{n-1} \subsetneq \dots \subsetneq X_0$ with the highest number of alternations in $\sigma'$. The sequence of assignments $\sigma(K) = \sigma(X_n), \sigma(X_{n-1}), \dotsc$ can only differ in one position from $\sigma'(K) = \sigma'(X_n), \dotsc$. We call this position $k$ and note that $\sigma(X_l) = \sigma'(X_l)$ for all $l < k$, as the determination occurs top-down, meaning all sets larger than the maximal previously undetermined set $X_k$ are left as-is. Now let $j > k$ be the smallest index such that $\sigma(X_j) = \sigma'(X_j) \neq \bot$. If $\sigma(X_j) = \sigma(X_{k+1})$ we have $\sigma'(X_j) = \sigma'(X_k) = \sigma'(X_{k+1})$ and consequently $\mathsf{ad}_\sigma(K) = \mathsf{ad}_{\sigma'}(K)$. Otherwise $\sigma(X_j) = 1 - \sigma(X_{k+1})$ but then by definition $\sigma'(X_k) = \sigma'(X_{k+1})$ and further we have that $\mathsf{ad}_\sigma(X_j \subseteq \dotsc \subseteq X_{k + 1}) = \mathsf{ad}_{\sigma'}(X_j \subseteq \dotsc \subseteq X_k)$ since $\sigma(X_l) = \sigma'(X_l) = \bot$ for all $j > l > k$. Combining these two insights with the fact that $\sigma(K)$ and $\sigma'(K)$ coincide on the assignments for all $l > j$ and $k < l$, we can conclude that indeed $|\sigma| = |\sigma'|$.
\end{proof}
So now we know that determining $\bot$-sets top-down leads to a completely specified Muller condition $\varsigma \triangleright \sigma$, which is union-closed and induces a Zielonka path of length $|\sigma|$. This means that there exists a parity condition that is equivalent to $\sigma$ which makes use of $|\sigma|$ distinct priorities. Furthermore we can observe that after determination, the successor to the root, say $(x, C)$ is the unique maximal subset $C' \subseteq C$ with $\sigma(C') = 1 - \sigma(C)$. From this point on we can view $(1 - x, C')$ as the root of a Zielonka tree again and consequently find its successor as the maximal subset $C'' \subseteq C'$ with $\sigma(C'') = 1 - \sigma(C') = \sigma(C)$. This procedure can be continued until we reach a set $A$ for which no sets $A' \subseteq A$ with $\sigma(A') = 1 - \sigma(A)$ exist.
To compute a parity condition $c : C \to \{0, 1, \dotsc, |\sigma| - 1\}$, however, we do not have to compute the assignments to all undetermined sets, instead we start off by computing a chain of length $|\sigma| + 1$
\[K = X_0 \supsetneq \dotsc \supsetneq X_{|\sigma| - 1} \mbox{ where } X_0 = C\]
such that $X_{i+1}$ is the unique maximal subset of $X_i$ with $\sigma(X_i) = 1 - \sigma(X_{i+1})$ for $i < |\sigma| - 1$. If we additionally set $X_{|\sigma|} := \emptyset$ this allows us to define the priority function $c$ based on the colors that dissapear in each step along $K$:
\[c(x) = i \mbox{ for the unique } i \mbox{ such that } x \in X_i \setminus X_{i+1}.\]
Finally we want to illustrate how it is possible to test whether a given Hole condition $\sigma$ is union-closed without explicitly checking that the union $X \cup Y$ for each pair $(X, Y) \subseteq (F^\sigma_0 \times F^\sigma_0) \cup (F^\sigma_1 \times F^\sigma_1)$ belongs to the correct set, as this could incur exponential complexity.
\begin{remark}
A Hole condition $\sigma : C \tto \Theta$ is union-closed if and only if for all $Y \subseteq C$ it holds
\begin{itemize}
\item if $\sigma(Y) = a$ is defined then
\[
Y \neq \bigcup \{X \subseteq Y : \sigma(X) = 1 - \sigma(Y)\}
\]
\item if $\sigma(Y) = \bot$ is undefined then we have that
\[
\bigcup (\mathop{\sigma^{-1}}(+) \cap Y) \neq \bigcup (\mathop{\sigma^{-1}}(-) \cap Y)
\]
\end{itemize}
\end{remark}
\begin{proof}
We assume $Y$ to be the smallest set for which $X_1,\dotsc,X_n$ with $\sigma(Y) = 1 - \sigma(X_i)$ and $Y = \bigcup X_i$ exist. Then clearly due to the minimality of $Y$ there needs to exist a partition \[
I \cupdot I' = \{1,\dotsc,n\} \mbox{ such that } Y = \overbrace{(\bigcup_{i \in I}X_i)}^{A} \cup \overbrace{(\bigcup_{i \in I'} X_i)}^{B} \mbox{ with } \sigma(A) = \sigma(B) = 1 - \sigma(Y).\]
We can then conclude from $A \cup B = Y$ that $\sigma$ is indeed not union-closed.
\end{proof}
The previous remark already gives rise to a criterion under which we can \emph{exclude} that a given $\sigma$ is union-closed. But it can only detect violations and therefore the absence of a set $Y$ with a decomposition as above does not guarantee that $\sigma$ is actually union-closed. Rather we merely exclude the existence of a violation and we will call a Hole condition that passes this test \emph{consistent}.
\begin{remark}
Let $\sigma : C \tto \Theta$ be a consistent Hole condition and $u \subseteq C$ be an undetermined set with $\sigma(u) = \bot$. There exists a consistent extension $\sigma' \triangleright \sigma$ with $\sigma'(u) \neq \bot$.
\end{remark}
\begin{proof}
Assume that there exist $n \subseteq C$ and $p \subseteq C$ such that $\sigma(u \cup p) = -$ and $\sigma(u \cup n) = +$, which would mean that setting either $\sigma'(u) = +$ or $\sigma'(u) = -$ introduces an inconsistency. Now consider $x = u \cup p \cup n$
\end{proof}
\end{document}
#!/usr/bin/bash
cargo test
cd samples/inf_a
RUST_BACKTRACE=1 cargo run -- -s s.csv > out.hoa
cat target.hoa | autcross 'echo "%H" && cat out.hoa > %O' --language-preserved
cd ../fin_b
RUST_BACKTRACE=1 cargo run -- -s s.csv > out.hoa
cat target.hoa | autcross 'echo "%H" && cat out.hoa > %O' --language-preserved
cd ../inf_doublea
RUST_BACKTRACE=1 cargo run -- -s s.csv > out.hoa
cat target.hoa | autcross 'echo "%H" && cat out.hoa > %O' --language-preserved
cd ..
cd ..
use crate::automaton::Index;
use itertools::Itertools;
use log::info;
use std::collections::{BTreeSet, HashSet};
use std::{cmp::Ordering, collections::hash_map::RandomState, ops::Not};
use std::{
collections::HashMap,
ops::{Add, Rem},
};
use std::{
convert::TryInto,
fmt::{Display, Formatter},
};
/// Represents the kind of loop, of which there are three. Infinite recurrence of
/// an accepting/rejecting loop must lead to an accepting/rejecting run. Finally we
/// also allow for undetermined loops, which are not enforced to be accepting or rejecting
/// by the sample. These can be filled arbitrarily without breaking consistency
/// with the input sample.
#[derive(Debug, Copy, Clone, Hash, Eq, PartialEq)]
pub enum LoopType {
/// This must be an accepting loop.
Accepting,
/// It should be rejecting.
Rejecting,
/// The loop can take any of the other two variants without making the resulting
/// automaton inconsistent.
Undetermined,
}
/// Combines two LoopTypes, we know that unions of accepting loops must be accepting,
/// unions of rejecting ones must be rejecting and unions of mixed/undetermined loops
/// are again undetermined.
impl Add for LoopType {
type Output = LoopType;
fn add(self, rhs: Self) -> Self::Output {
match (self, rhs) {
(LoopType::Accepting, LoopType::Accepting) => LoopType::Accepting,
(LoopType::Rejecting, LoopType::Rejecting) => LoopType::Rejecting,
_ => LoopType::Undetermined,
}
}
}
impl Display for LoopType {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
LoopType::Accepting => write!(f, "+"),
LoopType::Rejecting => write!(f, "-"),
LoopType::Undetermined => write!(f, "?"),
}
}
}
impl Not for LoopType {
type Output = LoopType;
fn not(self) -> Self::Output {
match self {
LoopType::Accepting => LoopType::Rejecting,
LoopType::Rejecting => LoopType::Accepting,
LoopType::Undetermined => panic!("this is not supported as it is not really defined!"),
}
}
}
/// A loop consists of a a set of state-symbol pairs and a classification.
/// Loops can be accepting, rejecting or have no classification at all. The latter
/// case can usually only occur if the loop is not induced by an ultimately periodic
/// word from the input sample set.
/// We fully determine a Loop by the set of edges it is made up of, their multiplicity
/// is not relevant as acceptance is based on the set of edges occuring infinitely often,
/// which in turn depends on whether a loop is traversed infinitely often. Whether each
/// traversal uses a transition once or twice clearly plays no role in determining the set of
/// transitions that are used infinitely often.
#[derive(Debug, Clone, Hash, Eq, PartialEq)]
pub struct Loop {
pub(crate) edges: BTreeSet<(Index, char, Index)>,
pub(crate) kind: LoopType,
}
impl Loop {
pub fn coincides(&self, other: &Loop) -> bool {
self.edges == other.edges
}
/// Creates a Loop from an Iterator over its transitions/edges and a given LoopType,
/// which marks whether the loop is accepting, rejecting or undetermined.
pub fn new<I: Iterator<Item = (Index, char, Index)>>(edges: I, kind: LoopType) -> Loop {
Loop {
edges: edges.collect(),
kind,
}
}
/// Returns the length of the loop, i.e. the number of transitions it is made up of.
pub fn len(&self) -> usize {
self.edges.len()
}
/// Calculates _all_ words on which the full loop can be traversed. Those are essentially all
/// the same sequence of characters, just unravelled/rotated to a different position.
/// If a loop can be traversed with abcd, then depending on which state we begin in, it is
/// also traversable with bcda, cdba as well as dabc.
pub fn words(&self) -> HashSet<String> {
unimplemented!();
}
/// Returns the set of all states the loop passes through.
pub fn states(&self) -> HashSet<Index> {
self.edges.iter().map(|(q, _c, _p)| *q).collect()
}
/// Returns true if and only if the loop passes through the given state.
pub fn intersects(&self, state: Index) -> bool {
self.edges.iter().any(|(q, _c, _p)| *q == state)
}
/// For a given state this computes a set of strings, on which the full loop can be
/// traversed. If the loop does not pass through the given state, None is returned.
pub fn recurs_from(&self, state: Index) -> Option<HashSet<String>> {
if !self.intersects(state) {
return None;
}
match self.intersects(state) {
// depending on whether or not this state actually intersects the loop
true => Some(
self.edges
// we enumerate the edges to detect the position this intersection occurs in
.iter()
.enumerate()
// and find out the starting points of all intersections
.filter(|(_n, (q, _c, _p))| *q == state)
.map(|(n, _)| {
// for which we then calculate the associated recurring word by cycling the char sequence
// making up the loop, skipping to the specific position the intersection occurs
// and fetching the following *looplength*-symbols
self.edges
.iter()
.map(|(_q, c, _p)| *c)
.cycle()
.skip(n)
.take(self.len())
.collect()
})
.collect(),
),
false => None,
}
}
pub fn determine(&self, to: LoopType) -> Loop {
match self.kind {
LoopType::Undetermined => Loop {
edges: self.edges.clone(),
kind: to,
},
_ => self.clone(),
}
}
pub fn undetermined_to_acc(&self) -> Loop {
match self.kind {
LoopType::Undetermined => Loop {
edges: self.edges.clone(),
kind: LoopType::Accepting,
},
_ => self.clone(),
}
}
}
/// Implement addition, l + m, for Loops l, m. The result is a new Loop consisting
/// of the union of the transitions making up l and m.
impl Add for Loop {
type Output = Loop;
fn add(self, rhs: Self) -> Self::Output {
match self.partial_cmp(&rhs) {
Some(Ordering::Less) => rhs,
Some(Ordering::Greater) => self,
_ => Loop {
edges: self
.edges
.into_iter()
.chain(rhs.edges.into_iter())
.collect(),
kind: self.kind + rhs.kind,
},
}
}
}
impl Add<&Loop> for &Loop {
type Output = Loop;
fn add(self, rhs: &Loop) -> Self::Output {
self.clone().add(rhs.clone())
}
}
impl Rem for Loop {
type Output = bool;
fn rem(self, rhs: Self) -> Self::Output {
!self
.states()
.intersection(&rhs.states())
.collect_vec()
.is_empty()
}
}
impl Rem<&Loop> for &Loop {
type Output = bool;
fn rem(self, rhs: &Loop) -> Self::Output {
!self
.states()
.intersection(&rhs.states())
.collect_vec()
.is_empty()
}
}
/// We define a partial ordering on the type Loop, here we say that
/// for loops l, m it holds that l < m if an only if the transitions that occur
/// in l are a subset of those occurring in m.
impl PartialOrd for Loop {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
if self.edges.is_subset(&other.edges) {
Some(Ordering::Less)
} else if self.edges.is_superset(&other.edges) {
Some(Ordering::Greater)
} else {
None
}
}
}
impl Display for Loop {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
assert!(!self.edges.is_empty());
write!(f, "{} : {{", self.kind)?;
for (q, c, p) in &self.edges {
write!(f, "({} -{}-> {})", q, c, p)?;
}
write!(f, "}}")
}
}
/// An inclusion graph consists of a set of Loops and an order on these loops.
///
/// Such a graph captures the inclusions between loops in the form of a DAG.
/// From this structure it is possible to determine whether a parity condition
/// exists. Further, such a condition can be efficiently computed.
///
/// To realize this, the DAG is first converted into a Zielonka tree. It is known
/// that a parity condition can be constructed iff this tree is a path:
/// \\[ \ell_1 \subset \ell_2 \subset \ell_3 \subset \dotsc \subset \ell_{n-1} \subset \ell_n \\]
/// A parity condition can now be computed by associating all transitions that occur
/// in loop l_n but not l_(n-1) with the most significant (which in our case due to
/// the min-even condition 0) parity. This is continued to the difference between
/// l_2 and l_1, which in turn is assigned the least significant parity.
///
/// Furthermore it is possible to read the maximal number alternation from the calculated
/// Zielonka path. This corresponds to the number of distinct parities needed.
pub struct InclusionGraph {
pub(crate) loops: HashSet<Loop>,
}
impl From<HashSet<Loop>> for InclusionGraph {
fn from(data: HashSet<Loop, RandomState>) -> Self {
InclusionGraph { loops: data }
}
}
impl Display for InclusionGraph {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
writeln!(
f,
"positive: [{}]",
self.loops
.iter()
.filter(|l| l.kind == LoopType::Accepting)
.map(|l| l.to_string())
.join(", ")
)?;
writeln!(
f,
"negative: [{}]",
self.loops
.iter()
.filter(|l| l.kind == LoopType::Rejecting)
.map(|l| l.to_string())
.join(", ")
)?;
writeln!(
f,
"indefinite: [{}]",
self.loops
.iter()
.filter(|l| l.kind == LoopType::Undetermined)
.map(|l| l.to_string())
.join(", ")
)
}
}
impl InclusionGraph {
/// Ensures that both parameters are actually part of the InclusionGraph and checks whether
/// superloop is an extension of subloop, meaning that all edges that occur in subloop must
/// also occur in superloop.
pub fn rel(&self, subloop: &Loop, superloop: &Loop) -> bool {
assert!(
self.contains(subloop),
"{} does not contain {}",
self,
subloop
);
assert!(
self.contains(superloop),
"{} does not contain {}",
self,
superloop
);
subloop < superloop
}
pub fn contains(&self, what: &Loop) -> bool {
self.loops.iter().any(|l| l.coincides(what))
}
fn is_union(&self, union: &Loop, left: &Loop, right: &Loop) -> bool {
assert!(
self.loops.contains(union) && self.loops.contains(left) && self.loops.contains(right)
);
union
.edges
.symmetric_difference(&left.edges)
.chain(union.edges.symmetric_difference(&right.edges))
.next()
.is_some()
}
/// Returns true if and only if superloop is a smallest superset of subloop, meaning no
/// loop lies between them from a set-inclusion perspective.
pub fn immediate(&self, subloop: &Loop, superloop: &Loop) -> bool {
self.rel(subloop, superloop)
&& !self
.loops
.iter()
.any(|l| l != subloop && l != superloop && subloop < l && l < superloop)
}
pub fn is_closed(&self) -> bool {
self.loops
.iter()
.cartesian_product(self.loops.iter())
.all(|(l1, l2)| {
let comb = l1 + l2;
if comb.kind == LoopType::Undetermined {
self.loops.iter().any(|ll| ll.coincides(&comb))
} else {
self.loops.contains(&comb)
}
})
}
/// Checks if the union of two determined (ie two accepting or two rejecting) loops is either
/// of the same type or undetermined. In the latter case such a conflict might be resolved
/// depending on the remaining loops. For now the check is aborted with a negative result
/// if a conflicting union is found, i.e. two accepting/rejecting loops that have as their
/// union a loop which has the exact opposite type.
pub fn union_closed(&self) -> bool {
for l1 in &self.loops {
if l1.kind != LoopType::Undetermined {
// we iterate over all loops that share a type which is not undetermined
// ie pairs of accepting and pairs of rejecting loops
for l2 in self.by_type(l1.kind) {
// if the union of both is of the opposite type we are not closed
// otherwise this can be remedied and we are good
let comb = l1 + l2;
if !self
.loops
.iter()
.all(|elem| elem.edges != comb.edges || elem.kind != !comb.kind)
{
return false;
}
}
}
}
true
}
pub fn maximum(&self) -> Option<&Loop> {
assert!(self.is_closed());
self.loops
.iter()
.find(|&l| !self.loops.iter().any(|ll| l != ll && l < ll))
}
pub fn minimum(&self) -> Option<&Loop> {
assert!(self.is_closed());
self.loops
.iter()
.find(|&l| !self.loops.iter().any(|ll| l != ll && ll < l))
}
pub fn by_type(&self, kind: LoopType) -> impl Iterator<Item = &Loop> + '_ {
self.loops.iter().filter(move |&l| l.kind == kind)
}
}
pub struct ZielonkaPath {
chain: Vec<Loop>,
}
impl ZielonkaPath {
pub fn len(&self) -> usize {
self.chain.len()
}
fn starting_priority(&self) -> usize {
match self
.chain
.iter()
.find(|&l| l.kind != LoopType::Undetermined)
{
Some(l) => match l.kind {
LoopType::Accepting => 0usize,
LoopType::Rejecting => 1usize,
LoopType::Undetermined => unreachable!("we excluded that with the call to find"),
},
None => panic!("only undetermined loops? should not be possible!"),
}
}
pub fn parities(&self) -> HashMap<(Index, char, Index), usize> {
let mut out = HashMap::new();
let bottom = Loop {
edges: BTreeSet::new(),
kind: LoopType::Undetermined,
};
let mut current_parity = self.starting_priority();
let mut it = self.chain.iter().peekable();
while let Some(cur) = it.next() {
let succ = match it.peek() {
Some(&l) => l,
None => &bottom,
};
// we need to flip acceptance only in these two cases
if (cur.kind == LoopType::Rejecting && current_parity % 2 == 0)
|| (cur.kind == LoopType::Accepting && current_parity % 2 == 1)
{
current_parity += 1;
}
let diff = cur.edges.difference(&succ.edges);
out.extend(diff.map(|&d| (d, current_parity)));
}
out
}
}
impl Display for ZielonkaPath {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{}",
self.chain
.iter()
.map(|l| l.to_string())
.join("\n\t\t|son\n\t\tv\n")
)
}
}
impl TryInto<ZielonkaPath> for InclusionGraph {
type Error = &'static str;
fn try_into(self) -> Result<ZielonkaPath, Self::Error> {
info!("computing Zielonka Path for InclusionGraph:\n{}", self);
if !self.is_closed() {
info!("Dumping Inclusion Graph due to Error {}", self);
return Err("graph is not closed, so it cannot be turned into Zielonka path(s)");
}
// first we extract the inclusion-maximal element of the inclusion graph, this becomes
// the root of our Zielonka Path
let mut out = vec![self
.maximum()
.ok_or("could not determine inclusion maximum")?
// maybe we need to just ensure that the maximal loop is accepting if it was undetermined before
.determine(LoopType::Accepting)];
let all_neg = self
.by_type(LoopType::Undetermined)
.filter(|&l| self.by_type(LoopType::Rejecting).any(|ll| l < ll))
.chain(self.by_type(LoopType::Rejecting))
.filter(|&l| l < &out[0])
.collect_vec();
let all_pos = self
.by_type(LoopType::Undetermined)
.filter(|&l| self.by_type(LoopType::Rejecting).all(|ll| !(l < ll)))
.chain(self.by_type(LoopType::Accepting))
.filter(|&l| l < &out[0])
.collect_vec();
// ensure that no undetermined loops are left
assert!(self
.by_type(LoopType::Undetermined)
.all(|l| all_neg.contains(&l) || all_pos.contains(&l)));
loop {
let previous = out.last().unwrap();
let candidate = all_neg
.iter()
.chain(all_pos.iter())
.filter(|&l| self.rel(l, previous) && l.kind == !previous.kind)
.sorted_by(|a, b| a.len().cmp(&b.len()))
.rev()
.collect_vec();
// assert!(
// candidate.len() <= 1,
// "candidate was not <= 1: {}\n{}",
// candidate.iter().map(|l| format!("{}", l)).join(","),
// self
// );
if let Some(&&l) = candidate.first() {
out.push(l.clone());
} else {
break;
}
}
// out should now contain all inclusion-maximal loops with alternating acceptance types
let zp = ZielonkaPath { chain: out };
info!("successfully computed path:\n{}", zp);
Ok(zp)
}
}
#[cfg(test)]
mod tests {
use crate::automaton::DFA;
#[test]
fn loop_intersection_test() {
let mut aut = DFA::new(vec!['a', 'b'].into_iter().collect());
let q0 = aut.add(0);
let q1 = aut.add(1);
let q2 = aut.add(2);
let q3 = aut.add(3);
let q4 = aut.add(4);
let q5 = aut.add(5);
aut.set_initial(q0);
aut.map(&q0, 'a', &q1);
aut.map(&q1, 'a', &q2);
aut.map(&q2, 'a', &q3);
aut.map(&q3, 'a', &q4);
aut.map(&q3, 'b', &q4);
aut.map(&q4, 'a', &q5);
aut.map(&q5, 'a', &q2);
for l in aut.elementary_loops() {
println!("{}", l);
for rec in l.recurs_from(q2).unwrap() {
println!("\tallows rec {} from q2", rec);
}
}
for l in aut.close(aut.elementary_loops()) {
println!("found {}", l);
}
}
}
use itertools::Itertools;
use std::fmt::{Display, Formatter, Result};
use std::ops::{Sub, SubAssign};
use std::{cmp::max, ops::Shl};
#[allow(clippy::clippy::derive_hash_xor_eq)]
#[derive(Clone, Debug, Hash)]
pub struct Word {
pub(crate) base: String,
pub(crate) ext: String,
}
impl Word {
pub fn new(b: &str, e: &str) -> Word {
Word {
base: String::from(b),
ext: String::from(e),
}
.reduced()
}
pub fn head(&self) -> char {
self.chars().next().unwrap()
}
pub fn has_prefix(&self, pref: &str) -> bool {
self.chars()
.take(pref.len())
.zip(pref.chars())
.all(|(l, r)| l == r)
}
pub fn chars(&self) -> impl Iterator<Item = char> + '_ {
self.base.chars().chain(self.ext.chars().cycle())
}
pub fn bare(&self) -> String {
format!("{}{}", self.base, self.ext)
}
pub fn combined_len(&self) -> usize {
self.base.len() + self.ext.len()
}
pub fn reduced(&mut self) -> Self {
let dedup = |wrd: &String| {
for i in (1..(wrd.len() - 1)).rev() {
if wrd.len() % i == 0 {
let p = wrd.len() / i;
if wrd
.chars()
.zip(wrd.chars().take(p).cycle())
.all(|(x, y)| x == y)
{
return wrd.chars().take(p).collect();
}
}
}
wrd.clone()
};
let bl = self.base.len();
let mut v: Vec<char> = self.ext.chars().collect();
if self.base.is_empty() {
return self.clone();
}
if self.base.chars().last().unwrap() != *v.last().unwrap() {
return self.clone();
}
let mut it = self.base.chars().rev();
for i in (0..bl).rev() {
if self.base.chars().nth(i).unwrap() == *v.last().unwrap() {
it.next();
v.rotate_right(1);
} else {
return Word {
ext: dedup(&v.into_iter().collect()),
base: it.rev().collect(), //self.base.chars().take(bl - i).collect(),
};
}
}
Word {
base: String::new(),
ext: dedup(&v.into_iter().collect()),
}
}
}
impl Display for Word {
fn fmt(&self, f: &mut Formatter<'_>) -> Result {
write!(f, "{},{}", self.base, self.ext)
}
}
impl<'a> PartialEq for Word {
fn eq(&self, other: &Self) -> bool {
let cl = 2 * max(self.combined_len(), other.combined_len());
self.chars()
.zip(other.chars())
.take(cl)
.all(|(c1, c2)| c1 == c2)
}
}
impl Eq for Word {}
impl SubAssign<usize> for Word {
fn sub_assign(&mut self, rhs: usize) {
// remove as much of the prefix as possible:
let base_len = self.base.len();
self.base = self.base.chars().skip(rhs).collect();
// if we are not yet done, we start rotating the suffix
if rhs > base_len {
let diff = rhs - base_len;
let mut chrs = self.ext.chars().collect_vec();
chrs.rotate_left(diff % self.ext.len());
self.ext = chrs.into_iter().collect();
}
}
}
impl Sub<usize> for Word {
type Output = Word;
fn sub(self, rhs: usize) -> Self::Output {
let mut out = self;
out -= rhs;
out
}
}
impl Sub<usize> for &Word {
type Output = Word;
fn sub(self, rhs: usize) -> Self::Output {
let mut out = self.clone();
out -= rhs;
out
}
}
impl Shl<usize> for Word {
type Output = Word;
fn shl(self, rhs: usize) -> Self::Output {
let mut chrs = self.ext.chars().collect_vec();
let mut base = self.base.chars().collect_vec();
base.extend(chrs.iter().copied().cycle().take(rhs));
chrs.rotate_left(rhs);
Word {
base: base.into_iter().collect(),
ext: chrs.into_iter().collect(),
}
}
}
impl Shl<usize> for &Word {
type Output = Word;
fn shl(self, rhs: usize) -> Self::Output {
self.clone() << rhs
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn minus_test() {
let mut wrd = Word::new("abc", "ac");
println!("{}", wrd);
wrd -= 1;
assert_eq!(wrd, Word::new("bc", "ac"));
assert_eq!(Word::new("", "debc"), Word::new("a", "bcde") - 3);
assert_eq!(Word::new("abcd", "fg") - 2, Word::new("cd", "fg"));
}
#[test]
fn prefix_test() {
let wrd = Word::new("", "b");
assert!(!wrd.has_prefix("aaa"));
}
#[test]
fn test_reduction() {
assert_eq!(
Word::new("a", "bcbc").chars().take(5).collect::<String>(),
"abcbc"
);
let wrd = Word {
base: String::from("asdf"),
ext: String::from("ggggggg"),
};
assert_eq!(wrd, Word::new("asdf", "g"));
let wrd = Word {
base: "".to_string(),
ext: "abcd".to_string(),
};
assert_eq!(wrd, Word::new("ab", "cdab"));
let wrd = Word {
base: String::from("asdfg"),
ext: String::from("ggggggg"),
};
assert_eq!(wrd, Word::new("asdf", "g"));
let wrd = Word {
base: String::from(""),
ext: String::from("b"),
};
assert_eq!(wrd, Word::new("bb", "b"));
}
}
use std::collections::{HashMap, HashSet, VecDeque};
use std::ffi::OsStr;
use std::fs::File;
use std::io::Read;
use std::path::Path;
use std::{cmp::Ordering, error::Error};
use itertools::Itertools;
use log::debug;
use serde::{Deserialize, Serialize};
use crate::automaton::{base, Base, DFA};
use super::word::*;
/// Represents a sample $S = (S^+, S+-)$ for an $\omega$-regular language $L \subseteq L^\omega$. Such a sample
/// consists of ultimately periodic words $\alpha = uv^\omega$ such that $S^+ \subseteq L$ and $S^- \cap L = \emptyset$.
/// Every $\omega$-regular language can be determined fully by the set of its ultimately periodic words and a
/// sample attempts to generalise this set to a subset.
///
/// For example the language $$L = \lbrace \alpha \in \Sigma^\omega: |\alpha|_a = \infty\rbrace$$
/// can be represented by the sample $S = (\lbrace (ba)^\omega\rbrace, \lbrace b^\omega\rbrace)$. Note, however, that
/// such a sample is not necessarily unique and there might exist infinitely many samples $S+ \supset S$ that induce
/// the same language.
#[derive(Debug, Clone)]
pub struct Sample {
pub(crate) pos: Vec<Word>,
pub(crate) neg: Vec<Word>,
}
#[derive(Debug, Deserialize, Serialize)]
struct SampleRecord {
base: String,
ext: String,
flag: usize,
}
pub struct CandidatePairs {
sorted_prefixes: Vec<String>,
right: usize,
left: usize,
}
impl Iterator for CandidatePairs {
type Item = (String, String);
fn next(&mut self) -> Option<Self::Item> {
if self.right < self.sorted_prefixes.len() {
let elem = (
self.sorted_prefixes[self.left].clone(),
self.sorted_prefixes[self.right].clone(),
);
self.left += 1;
if self.left >= self.right {
self.right += 1;
self.left = 0;
}
Some(elem)
} else {
None
}
}
}
impl Sample {
pub fn merge_candidates(&self) -> CandidatePairs {
CandidatePairs {
sorted_prefixes: self.initial_state_prefixes(),
right: 1,
left: 0,
}
}
pub fn words(&self) -> impl Iterator<Item = &Word> + '_ {
self.pos.iter().chain(self.neg.iter())
}
pub fn all_words(&self) -> Vec<&Word> {
self.pos.iter().chain(self.neg.iter()).collect()
}
pub fn symbols(&self) -> HashSet<char> {
let mut out = HashSet::new();
for word in self.words() {
for c in word.bare().chars() {
out.insert(c);
}
}
out
}
pub fn find_unique_by_prefix(&self, pref: &str) -> &Word {
self.pos.iter().find(|wrd| wrd.has_prefix(pref)).unwrap()
}
pub fn prefix_of_positive(&self, pref: &str) -> bool {
self.pos.iter().any(|wrd| wrd.has_prefix(pref))
}
/// Computes the prefixes based on which an initial automaton can then be constructed.
/// These are then sorted ascendingly by their length and lexicographic order, during
/// this duplicates are additionally removed.
pub fn initial_state_prefixes(&self) -> Vec<String> {
let mut queue = VecDeque::new();
queue.push_back("".to_string());
let mut states = HashSet::new();
while let Some(wrd) = queue.pop_front() {
states.insert(String::from(&wrd));
let matching_words = self
.words()
.filter(|word| word.has_prefix(&wrd))
.collect::<Vec<_>>();
let prefix_count = matching_words.len();
if prefix_count > 1 {
for sym in self.symbols() {
let mut new_wrd = String::from(&wrd);
new_wrd.push(sym);
if self.prefix_of_positive(&new_wrd) {
queue.push_back(new_wrd);
}
}
} else {
assert_eq!(
matching_words.len(),
1,
"word is \n{:#?}\n({}):{}",
wrd,
self.words().count(),
self.words().map(|w| w.to_string()).join(",")
);
let s_plus = matching_words.get(0).unwrap();
for i in wrd.len() + 1..s_plus.combined_len() {
states.insert(s_plus.chars().take(i).collect::<String>());
}
}
}
let mut out: Vec<String> = states.into_iter().collect();
out.sort_unstable_by(|a, b| match a.len().cmp(&b.len()) {
Ordering::Less => Ordering::Less,
Ordering::Greater => Ordering::Greater,
Ordering::Equal => a.cmp(b),
});
out.dedup();
out
}
fn from_reader<R: Read>(input: R) -> Result<Sample, Box<dyn Error>> {
let mut pos = Vec::new();
let mut neg = Vec::new();
let mut rdr = csv::ReaderBuilder::new()
.comment(Some(b'#'))
.has_headers(false)
.from_reader(input);
for result in rdr.deserialize() {
let rec: SampleRecord = result?;
// match result {
// Ok(r) => r,
// Err(why) => panic!("failed to deserialize! {}", why),
// };
let wrd = Word::new(&rec.base, &rec.ext);
match rec.flag == 0 {
false => pos.push(wrd),
true => neg.push(wrd),
}
}
Ok(Sample { pos, neg })
}
pub fn from_file<T: AsRef<OsStr>>(filename: T) -> Sample {
let path = Path::new(&filename);
debug!("reading sample file {}", path.display());
let mut file = match File::open(&path) {
Ok(file) => file,
Err(why) => panic!("could not open {}: {}", path.display(), why),
};
let mut s = String::new();
if let Err(why) = file.read_to_string(&mut s) {
panic!("could not read {}: {}", path.display(), why)
};
let pos: Vec<Word> = s
.lines()
.filter(|l| l.starts_with('+'))
.map(|l| l[1..].split(','))
.map(|mut s| Word::new(s.next().unwrap(), s.next().unwrap()).reduced())
.collect();
let neg: Vec<Word> = s
.lines()
.filter(|l| l.starts_with('-'))
.map(|l| l[1..].split(','))
.map(|mut s| Word::new(s.next().unwrap(), s.next().unwrap()).reduced())
.collect();
debug!(
"successfully read sample with {}/{} words",
pos.len(),
neg.len()
);
Sample { pos, neg }
}
pub fn from_csv<T: AsRef<Path>>(filename: T) -> Sample {
let mut pos = Vec::new();
let mut neg = Vec::new();
let rdr = csv::ReaderBuilder::new()
.comment(Some(b'#'))
.has_headers(false)
.from_path(filename);
match rdr {
Ok(mut reader) => {
for result in reader.deserialize() {
let rec: SampleRecord = match result {
Ok(r) => r,
Err(why) => panic!("failed to deserialize! {}", why),
};
let wrd = Word::new(&rec.base, &rec.ext);
match rec.flag == 0 {
false => pos.push(wrd),
true => neg.push(wrd),
}
}
Sample { pos, neg }
}
Err(why) => {
panic!("could not open file: {}", why);
}
}
}
pub fn from_u8(input: &[u8]) -> Result<Sample, Box<dyn Error>> {
Sample::from_reader(input)
}
pub fn to_csv<T: AsRef<Path>>(&self, filename: T) -> Result<(), Box<dyn Error>> {
let writer_builder = csv::WriterBuilder::new()
.has_headers(false)
.from_path(filename);
match writer_builder {
Ok(mut wrtr) => {
for w in &self.pos {
wrtr.serialize(SampleRecord {
base: w.base.clone(),
ext: w.ext.clone(),
flag: 1,
})?;
}
for w in &self.neg {
wrtr.serialize(SampleRecord {
base: w.base.clone(),
ext: w.ext.clone(),
flag: 0,
})?;
}
}
Err(why) => panic!("could not open file for output: {}", why),
}
Ok(())
}
}
impl From<&Sample> for DFA<Base> {
fn from(sample: &Sample) -> Self {
let prefixes = sample.initial_state_prefixes();
let mut mapping = HashMap::new();
let mut out: DFA<Base> = DFA::new(sample.symbols());
let initial = out.add(base(""));
out.set_initial(initial);
mapping.insert("", initial);
let itt = prefixes
.iter()
.skip(1)
.sorted_by(|a, b| match a.len().cmp(&b.len()) {
Ordering::Less => Ordering::Less,
Ordering::Equal => a.cmp(b),
Ordering::Greater => Ordering::Greater,
});
for pref in itt {
let child = out.add(base(pref.clone()));
let sym = pref.chars().last().unwrap();
if let Some(parent) = out.index(&base(&pref[..pref.len() - 1])) {
out.transition(parent, sym, child);
} else {
println!("{} \t {:#?}", pref[..pref.len() - 1].to_string(), out);
panic!(
"error when attempting to find parent for {}({}) on symbol {}",
child, pref, sym
);
}
}
out
}
}
#[cfg(test)]
mod tests {
use std::io::Write;
use super::*;
#[test]
fn test_sample_to_automaton() {
let sample = Sample::from_csv("/home/leon/Code/rspni/samples/inf_a/s.csv");
let aut = DFA::from(&sample);
println!("{}", aut.output_dot());
let mut file = File::create("/home/leon/Code/rspni/samples/inf_a/initial.dot").unwrap();
file.write_all(aut.output_dot().as_bytes()).unwrap();
}
#[test]
fn cycle_test() {
let sample = Sample::from_file("/home/leon/Code/rspni/samples/simple");
let csv_sample = Sample::from_csv("/home/leon/Code/rspni/samples/simple.csv");
let aut = DFA::from(&sample);
for w1 in &sample.pos {
for w2 in &sample.neg {
let d = aut.distinguishable(w1, w2);
assert!(d);
}
}
for w in sample.all_words() {
assert!(csv_sample.all_words().contains(&w));
}
for w in csv_sample.all_words() {
assert!(sample.all_words().contains(&w));
}
}
}
use log::{debug, info};
use crate::inclusions::{OldDAG, OldLoop};
use crate::relation::Relation;
use crate::sample::Sample;
use crate::word::Word;
use crate::zielonka::{InclusionGraph, Loop};
use crate::{
automaton::{base, Base, Class, Index, StateData, DFA},
zielonka::LoopType,
};
use itertools::Itertools;
use std::collections::{HashMap, HashSet};
use std::fmt::Display;
use std::{convert::TryFrom, error::Error};
/// The Learner struct represents an instance of the omega-RPNI algorithm.
/// It stores a sample, the base automaton it induces and maintains the current
/// congruence relation on the states of the base automaton as well as its
/// quotient with regard to the relation.
pub struct Learner {
pub(crate) base_automaton: DFA<Base>,
pub(crate) current_automaton: DFA<Class>,
pub(crate) congruence: Relation<Index>,
pub(crate) sample: Sample,
}
impl TryFrom<String> for Learner {
type Error = ();
fn try_from(value: String) -> Result<Self, Self::Error> {
let sample = Sample::from_csv(value);
let base_aut = DFA::from(&sample);
let cong = Relation::new(0..base_aut.size());
Ok(Learner {
current_automaton: base_aut.quotient(&cong),
base_automaton: base_aut,
congruence: cong,
sample,
})
}
}
impl TryFrom<&[u8]> for Learner {
type Error = Box<dyn Error>;
fn try_from(value: &[u8]) -> Result<Self, Self::Error> {
let sample = Sample::from_u8(value)?;
let base_aut = DFA::from(&sample);
let cong = Relation::new(0..base_aut.size());
Ok(Learner {
current_automaton: base_aut.quotient(&cong),
base_automaton: base_aut,
congruence: cong,
sample,
})
}
}
impl Learner {
/// This method takes a mutable instance of a quotient automaton and ensures that all positive
/// words of the stored sample induce infinite runs.
pub fn extend(&self, mut aut: DFA<Class>) -> DFA<Class> {
let mut escaping: HashMap<Index, Vec<Word>> = (0..aut.size())
.into_iter()
.map(|i| (i, Vec::new()))
.collect();
// iterate over all words that do not yet induce a cycle
for w in self
.sample
.pos
.iter()
.filter(|&wrd| aut.induced_cycle(wrd).is_none())
{
// println!("investigating {}", w);
// find a decomposition of w = b * e such that b is maximal and \delta^*(q_0, b) is defined
match aut.run(aut.initial, w.chars()) {
crate::automaton::RunResult::Successful(_) => {
unreachable!("cannot be reached as w must be leaving...")
}
crate::automaton::RunResult::Leaving(run) => {
// the last element of run must be the state where we're leaving from
let exit = run.last().unwrap();
// additionally the part of w that was consumed should be |run| - 1 long
let remainder: Word = w - (run.len() - 1);
// insert it into the mapping E
escaping.get_mut(exit).unwrap().push(remainder);
}
}
}
// we now iterate over all states for which E is non-empty
for state in (0..aut.size())
.into_iter()
.filter(|i| !escaping.get(i).unwrap().is_empty())
{
aut.transient_extend_for(state, escaping.get(&state).unwrap());
}
aut
}
pub fn display_current(&self) {
self.current_automaton.display();
}
fn find_inconsistent(&self, aut: &DFA<Class>) -> Vec<(&Word, &Word)> {
self.sample
.pos
.iter()
.cartesian_product(self.sample.neg.iter())
.filter(|(w1, w2)| !aut.distinguishable(*w1, *w2))
.collect()
}
fn is_consistent(&self, aut: &DFA<Class>) -> bool {
for pos in &self.sample.pos {
for neg in &self.sample.neg {
if !aut.distinguishable(pos, neg) {
return false;
}
}
}
true
}
fn is_paritizable(&self, aut: &DFA<Class>) -> bool {
self.classified_loops(aut).union_closed()
}
/// The central loop of the learning algorithm, which starts by calculating the set of
/// candidate merges as a set of pairs of prefixes of words induced by the sample. Each
/// candidate is a pair $(u, v)$ with $$u, v \in \lbrace w \in \Sigma^*: w \sqsubseteq \alpha, \alpha \in S^+\cup S^- \rbrace$$
/// and $u \prec_{lex} v$ where $\prec_{lex}$ refers to the lexicographic ordering.
pub fn learn(&mut self, series: bool, base_filename: String) -> DFA<Class> {
let candidates = self.sample.merge_candidates();
let mut iteration: usize = 1;
self.current_automaton
.render(format!("{}.{}", base_filename, 0));
for (c1, c2) in candidates {
let q1 = self.base_automaton.index(&base(c1.clone())).unwrap();
let q2 = self.base_automaton.index(&base(c2.clone())).unwrap();
if self.congruence.contains((q1, q2)) {
continue;
}
let new_cong = self
.base_automaton
.extend_congruence(&self.congruence, q1, q2);
let new_aut = self.base_automaton.quotient(&new_cong);
if self.is_paritizable(&new_aut) {
info!("merge of {} and {} is consistent!", c1.clone(), c2.clone());
if series {
new_aut.render(format!("{}.{}", base_filename, iteration));
}
self.current_automaton = new_aut;
self.congruence = new_cong;
iteration += 1;
} else {
debug!(
"merge of {} and {} is not consistent!",
c1.clone(),
c2.clone()
);
for (w1, w2) in self.find_inconsistent(&new_aut) {
debug!("\tcannot distinguish {} and {}", w1, w2);
}
// if c1 == "" && c2 == "aba" {
// info!("hier passiert jetzt was!");
// new_aut.write_to_file("/tmp/testing.dot");
// info!(
// "merge of {} and {} is not consistent!",
// c1.clone(),
// c2.clone()
// );
// for (w1, w2) in self.find_inconsistent(&new_aut) {
// info!("\tcannot distinguish {} and {}", w1, w2);
// }
// }
}
}
if !series && iteration > 1 {
self.current_automaton
.render(format!("{}.{}", base_filename, iteration));
}
self.extend(self.base_automaton.quotient(&self.congruence))
.render(format!("{}.{}", base_filename, iteration + 1));
self.base_automaton.quotient(&self.congruence)
}
fn loop_compatible(&self, l: &Loop, w: &Word) -> bool {
if let Some(fromstate) = self
.current_automaton
.advance(self.current_automaton.initial, w.base.chars())
{
l.states().contains(&fromstate) && l.words().contains(&w.ext)
} else {
false
}
}
/// Computes the set of all possible loops in `aut`, classified into accepting, rejecting and
/// undetermined ones based on the stored sample. The resulting loops are then assembled
/// into a InclusionGraph which modeling the inclusion relation between them.
pub fn classified_loops(&self, aut: &DFA<Class>) -> InclusionGraph {
let classified = self
.sample
.pos
.iter()
.filter_map(|w| aut.loop_for(w, LoopType::Accepting))
.chain(
self.sample
.neg
.iter()
.filter_map(|w| aut.loop_for(w, LoopType::Rejecting)),
)
.collect_vec();
info!("classified: {:?}", classified);
let all_loops = aut
.elementary_loops()
.into_iter()
.filter(|l| classified.iter().all(|ll| !ll.coincides(l)));
info!("all: {:?}", all_loops);
InclusionGraph {
loops: aut.close(all_loops.chain(classified.iter().cloned()).collect()),
}
}
/// Computes a parity function for the current automaton, i.e. a mapping from state-symbol
/// pairs to positive integers.
pub fn parity(&self) -> Option<HashMap<(usize, char), usize>> {
let ploops = self
.sample
.pos
.iter()
.map(|wrd| self.current_automaton.induced_cycle(wrd))
.filter(|cyc| cyc.is_some())
.map(|cyc| OldLoop {
edges: cyc.unwrap().iter().map(|(q, c)| (*q, *c)).collect_vec(),
accepting: true,
})
.collect::<HashSet<_>>();
let nloops = self
.sample
.neg
.iter()
.map(|wrd| self.current_automaton.induced_cycle(wrd))
.filter(|cyc| cyc.is_some())
.map(|cyc| OldLoop {
edges: cyc.unwrap().iter().map(|(q, c)| (*q, *c)).collect_vec(),
accepting: false,
})
.collect::<HashSet<_>>();
let combined = ploops
.into_iter()
.chain(nloops.into_iter())
.collect::<HashSet<_>>();
let dag = OldDAG {
inclusions: combined
.iter()
.cartesian_product(combined.iter())
.filter(|(l1, l2)| l1 < l2)
.collect(),
loops: combined.iter().collect(),
};
dag.parity()
}
/// Returns true if and only if the passed automaton can be turned into a DFA
/// that respects the positive and negative words of the stored sample.
pub fn paritizable<S: StateData + Display>(&self, aut: &DFA<S>) -> bool {
let ploops = self
.sample
.pos
.iter()
.map(|wrd| aut.induced_cycle(wrd))
.filter(|cyc| cyc.is_some())
.map(|cyc| OldLoop {
edges: cyc.unwrap().iter().map(|(q, c)| (*q, *c)).collect_vec(),
accepting: true,
})
.collect::<HashSet<_>>();
let nloops = self
.sample
.neg
.iter()
.map(|wrd| aut.induced_cycle(wrd))
.filter(|cyc| cyc.is_some())
.map(|cyc| OldLoop {
edges: cyc.unwrap().iter().map(|(q, c)| (*q, *c)).collect_vec(),
accepting: false,
})
.collect::<HashSet<_>>();
let combined = ploops
.into_iter()
.chain(nloops.into_iter())
.collect::<HashSet<_>>();
let dag = OldDAG {
inclusions: combined
.iter()
.cartesian_product(combined.iter())
.filter(|(l1, l2)| l1 < l2)
.collect(),
loops: combined.iter().collect(),
};
println!("{}", dag);
for (l1, l2) in &dag.inclusions {
assert!(l1.edges.len() <= l2.edges.len());
}
println!("{:?}", dag.parity());
dag.loops
.iter()
.cartesian_product(dag.loops.iter())
.filter(|(l1, l2)| l1 != l2 && l1.accepting == l2.accepting)
.all(|(l1, l2)| {
dag.loops
.iter()
.filter(|supl| l1 < *supl && l2 < *supl)
.all(|supl| supl.accepting == l1.accepting)
})
}
}
#[cfg(test)]
mod tests {
use std::{collections::HashSet, convert::TryInto};
use crate::zielonka::ZielonkaPath;
use super::*;
#[test]
fn consistency_test() {
let sample_path = "/home/leon/Code/rspni/samples/inf_a/".to_string();
let sample = Sample::from_file(format!("{}s.csv", sample_path));
let mut aut = DFA::new(vec!['a', 'b'].into_iter().collect::<HashSet<char>>());
let a = aut.add(base(""));
let b = aut.add(base("b"));
aut.set_initial(a);
aut.transition(a, 'a', a);
aut.transition(a, 'b', b);
aut.transition(b, 'a', a);
aut.transition(b, 'b', b);
aut.write_to_file(format!("{}testaut.base", sample_path));
for pos in &sample.pos {
for neg in &sample.neg {
let res = aut.distinguishable(pos, neg);
if !res {
println!("cannot distinguish {} and {}", pos, neg);
}
}
}
}
#[test]
fn simple_display_test() {
let sample_path = "/home/leon/Code/rspni/samples/inf_a/s.csv".to_string();
let mut l = Learner::try_from(sample_path.to_string()).unwrap();
l.base_automaton
.write_to_file(format!("{}.base", sample_path));
l.current_automaton
.write_to_file(format!("{}.out", sample_path));
let final_result = l.learn(true, sample_path.clone());
final_result.write_to_file(format!("{}.zfinal", sample_path));
}
#[test]
fn parity_computation_test() {
let bfn = "/home/leon/Code/rspni/samples/inf_doublea/s.csv".to_string();
let mut l = Learner::try_from(bfn.clone()).unwrap();
let result = l.learn(false, bfn);
println!("learned");
result.write_to_file("/tmp/now.dot");
let ress = l.extend(result);
ress.write_to_file("/tmp/autt.dot");
// for ((q, sym), c) in l.parity().unwrap() {
// println!("c({}, {}) = {}", q, sym, c);
// }
// println!("{}", result.output_tpa_hoa(&l.parity().unwrap()));
}
#[test]
fn fin_b_classification_test() {
let thesample = "b,a,1\n,ba,0\n";
let mut l = Learner::try_from(thesample.as_bytes()).unwrap();
let result = l.learn(false, "/tmp/temp".to_string());
result.write_to_file("/tmp/fin_b_class_out.dot");
let ig = l.classified_loops(&result);
let zp: ZielonkaPath = ig.try_into().unwrap();
assert_eq!(zp.len(), 2);
let parities = zp.parities();
assert_eq!(
parities.values().collect::<HashSet<_>>(),
vec![&1, &2].into_iter().collect::<HashSet<_>>()
);
}
#[test]
fn inf_a_classification_test() {
let thesample = ",b,0\n,ba,1\n";
let mut l = Learner::try_from(thesample.as_bytes()).unwrap();
let result = l.learn(false, "/tmp/temp".to_string());
result.write_to_file("/tmp/inf_a_class_out.dot");
let ig = l.classified_loops(&result);
let zp: ZielonkaPath = ig.try_into().unwrap();
assert_eq!(zp.len(), 2);
let parities = zp.parities();
assert_eq!(
parities.values().collect::<HashSet<_>>(),
vec![&0, &1].into_iter().collect::<HashSet<_>>()
);
}
#[test]
fn inf_doublea_classification_test() {
let thesample = ",a,1\n,ba,0\n,bba,0\n,bbaa,1\n";
let mut l = Learner::try_from(thesample.as_bytes()).unwrap();
let result = l.learn(false, "/tmp/temp".to_string());
result.write_to_file("/tmp/fin_b_class_out.dot");
let ig = l.classified_loops(&result);
let zp: ZielonkaPath = ig.try_into().unwrap();
assert_eq!(zp.len(), 2);
let parities = zp.parities();
assert_eq!(
parities.values().collect::<HashSet<_>>(),
vec![&0, &1].into_iter().collect::<HashSet<_>>()
);
}
#[test]
fn loop_classification_test() {
let fname = "/home/leon/Code/rspni/samples/fin_b/s.csv".to_string();
let l = Learner::try_from(fname).unwrap();
let res = l.extend(l.base_automaton.quotient(&l.congruence));
res.write_to_file("/tmp/temp.dot");
for l in res.elementary_loops() {
println!("found {}", l);
}
println!("=====");
for l in res.close(res.elementary_loops()) {
println!("found {}", l);
}
println!("+++++");
let cloops = l.classified_loops(&res);
for w in &l.sample.pos {
println!("checking word {}", w);
if let Some(lp) = res.loop_for(w, LoopType::Accepting) {
println!("\tfound {}", lp);
assert!(cloops.loops.contains(&lp));
}
}
for w in &l.sample.neg {
println!("checking word {}", w);
if let Some(lp) = res.loop_for(w, LoopType::Rejecting) {
println!("\tfound {}", lp);
assert!(cloops.loops.contains(&lp));
}
}
println!("%%%%%");
for l in &cloops.loops {
println!("loop: {}", l);
}
let ig = l.classified_loops(&res);
println!("{}", ig);
let zp: ZielonkaPath = ig.try_into().unwrap();
println!("{}", zp);
for (t, p) in zp.parities() {
println!("({}, {}, {}) => {}", t.0, t.1, t.2, p);
}
}
}
use itertools::Itertools;
use std::collections::{HashMap, HashSet};
use std::fmt::{Debug, Display, Formatter, Result};
use std::hash::Hash;
pub trait Relatable: Eq + PartialEq + Debug + Copy + Hash + Display {}
impl Relatable for usize {}
pub trait BinaryRelation<S: Relatable> {
fn new(dim: usize) -> Self;
fn is(&self, left: &S, right: &S) -> bool;
fn add(&mut self, left: S, right: S) -> bool;
fn extend<I: Iterator<Item = (S, S)>>(&mut self, it: I) -> bool;
fn trans(&mut self);
fn class(&self, elem: S) -> HashSet<S>;
fn all_classes(&self) -> Vec<HashSet<S>>;
}
#[derive(Debug)]
pub struct BooleanTable {
values: Vec<bool>,
dimension: usize,
}
impl BooleanTable {
pub fn pairs(&self) -> impl Iterator<Item = (usize, usize)> + '_ {
(0..self.dimension)
.into_iter()
.cartesian_product((0..self.dimension).into_iter())
.filter(move |(x, y)| self.is(x, y))
}
}
impl BinaryRelation<usize> for BooleanTable {
fn new(dim: usize) -> Self {
BooleanTable {
values: std::iter::repeat(false).take(dim * dim + 1).collect(),
dimension: dim,
}
}
fn is(&self, left: &usize, right: &usize) -> bool {
self.values[self.dimension * left + right]
}
fn add(&mut self, left: usize, right: usize) -> bool {
let position_1 = self.dimension * left + right;
let position_2 = self.dimension * right + left;
let was = self.values[position_1] | self.values[position_2];
self.values[position_1] = true;
self.values[position_2] = true;
!was
}
fn extend<I: Iterator<Item = (usize, usize)>>(&mut self, mut it: I) -> bool {
// let mut was = false;
// for (p1, p2) in it {
// was |= self.add(p1, p2);
// }
it.any(|(p1, p2)| self.add(p1, p2))
}
fn trans(&mut self) {
for i in 0..self.dimension {
for j in 0..self.dimension {
for k in 0..self.dimension {
let ij = i * self.dimension + j;
let ik = i * self.dimension + k;
let ji = j * self.dimension + i;
let jk = j * self.dimension + k;
let ki = k * self.dimension + i;
let kj = k * self.dimension + j;
if self.values[ij] && self.values[jk] {
self.values[ik] = true;
}
if self.values[ik] && self.values[kj] {
self.values[ij] = true;
}
if self.values[ji] && self.values[ik] {
self.values[jk] = true;
}
if self.values[jk] && self.values[ki] {
self.values[ji] = true;
}
if self.values[ki] && self.values[ij] {
self.values[kj] = true;
}
if self.values[kj] && self.values[ji] {
self.values[ki] = true;
}
}
}
}
}
fn class(&self, elem: usize) -> HashSet<usize> {
(0usize..self.dimension)
.into_iter()
.filter(|c| self.values[*c + elem * self.dimension])
.collect::<HashSet<usize>>()
}
fn all_classes(&self) -> Vec<HashSet<usize>> {
let mut out = Vec::new();
let mut seen: HashSet<usize> = HashSet::new();
for i in 0usize..self.dimension {
if !seen.contains(&i) {
let cls = self.class(i);
if !cls.is_empty() {
seen.extend(cls.iter());
out.push(cls);
}
}
}
out
}
}
#[derive(Debug)]
pub struct AdjMat<S> {
ptr: Vec<Vec<bool>>,
dim: usize,
mapping: HashMap<usize, S>,
gnippam: HashMap<S, usize>,
}
impl<S: Relatable> From<&Relation<S>> for AdjMat<S> {
fn from(r: &Relation<S>) -> Self {
let n = r.universe.len();
let mut ptr: Vec<Vec<bool>> = Vec::new();
for _ in 0..n {
ptr.push(vec![false].repeat(n))
}
let mut mapping = HashMap::new();
let mut gnippam = HashMap::new();
for (i, j) in r.universe.iter().enumerate() {
mapping.insert(i, *j);
gnippam.insert(*j, i);
}
for (p1, p2) in &r.pairs {
ptr[*gnippam.get(p1).unwrap()][*gnippam.get(p2).unwrap()] = true;
}
AdjMat {
ptr,
dim: n,
mapping,
gnippam,
}
}
}
impl<S: Relatable> Display for Relation<S> {
fn fmt(&self, f: &mut Formatter<'_>) -> Result {
for class in self.classes() {
write!(
f,
"[{}]",
class.iter().map(|e| e.to_string()).join(",").to_string()
)?;
}
Ok(())
}
}
impl<S: Relatable> From<AdjMat<S>> for Relation<S> {
fn from(am: AdjMat<S>) -> Self {
let mut r = Relation::new(vec![].into_iter());
for i in 0..am.dim {
for j in 0..am.dim {
if am.ptr[i][j] {
r.add(*am.mapping.get(&i).unwrap(), *am.mapping.get(&j).unwrap());
}
}
}
r
}
}
impl<S> AdjMat<S> {
pub fn transitive(&mut self) {
for i in 0..self.dim {
for j in 0..self.dim {
for k in 0..self.dim {
if self.ptr[i][j] && self.ptr[j][k] {
self.ptr[i][k] = true;
}
if self.ptr[i][k] && self.ptr[k][j] {
self.ptr[i][j] = true;
}
if self.ptr[j][i] && self.ptr[i][k] {
self.ptr[j][k] = true;
}
if self.ptr[j][k] && self.ptr[k][i] {
self.ptr[j][i] = true;
}
if self.ptr[k][j] && self.ptr[j][i] {
self.ptr[k][i] = true;
}
if self.ptr[k][i] && self.ptr[i][j] {
self.ptr[k][j] = true;
}
}
}
}
}
}
#[derive(Debug)]
pub struct Relation<S: Eq + Hash + PartialEq> {
pub(crate) pairs: HashSet<(S, S)>,
universe: HashSet<S>,
}
impl<S: Relatable> Relation<S> {
pub fn new<I: Iterator<Item = S> + Clone>(universe: I) -> Relation<S> {
Relation {
pairs: universe.clone().map(|e| (e, e)).collect(),
universe: universe.into_iter().collect(),
}
}
pub fn contains(&self, pair: (S, S)) -> bool {
pair.0 == pair.1 || self.pairs.contains(&pair)
}
pub fn add(&mut self, a: S, b: S) {
self.pairs.extend(vec![(a, b), (b, a), (a, a), (b, b)]);
self.universe.extend(vec![a, b].into_iter());
}
pub fn right_all<'a>(&'a self, a: &'a S) -> impl Iterator<Item = &S> + 'a {
self.pairs
.iter()
.filter(move |(i, _j)| i == a)
.map(|(_i, j)| j)
}
pub fn left_all<'a>(&'a self, a: &'a S) -> impl Iterator<Item = &S> + 'a {
self.pairs
.iter()
.filter(move |(_i, j)| j == a)
.map(|(i, _j)| i)
}
pub fn transitive(&self) -> Self {
let mut am = AdjMat::from(self);
am.transitive();
Relation::from(am)
}
pub fn classes(&self) -> Vec<HashSet<&S>> {
let mut out = Vec::new();
let mut seen = HashSet::new();
for e in &self.universe {
if seen.insert(e) {
let cls = self.right_all(e).collect::<HashSet<_>>();
seen.extend(cls.iter());
out.push(cls);
}
}
out
}
pub fn class_for(&self, what: S) -> HashSet<S> {
self.pairs
.iter()
.filter(|(l, _r)| *l == what)
.map(|(_, r)| *r)
.collect()
}
pub fn map(&self, mapping: &HashMap<S, S>) -> Relation<S> {
Relation {
pairs: self
.pairs
.iter()
.map(|(p1, p2)| (*mapping.get(p1).unwrap(), *mapping.get(p2).unwrap()))
.collect(),
universe: mapping.values().copied().collect(),
}
}
pub fn unmap(&self, mapping: &HashMap<S, S>) -> Relation<S> {
let mut out_pairs = HashSet::new();
let mut out_universe = HashSet::new();
for (p1, p2) in &self.pairs {
for m1 in mapping.keys() {
for m2 in mapping.keys() {
if mapping.get(m1).unwrap() == p1 && mapping.get(m2).unwrap() == p2 {
out_pairs.insert((*m1, *m2));
}
}
}
}
for e in &self.universe {
for m in mapping.keys() {
if mapping.get(m).unwrap() == e {
out_universe.insert(*m);
}
}
}
Relation {
pairs: out_pairs,
universe: out_universe,
}
}
}
impl<S: Relatable> PartialEq for Relation<S> {
fn eq(&self, other: &Self) -> bool {
self.universe == other.universe && self.pairs == other.pairs
}
}
impl<S: Relatable> Eq for Relation<S> {}
#[cfg(test)]
mod tests {
use std::time::Instant;
use super::*;
#[test]
fn map_test() {
let mut r = Relation::new(vec![0, 1, 2, 3, 4usize].into_iter());
r.add(0, 2);
r.add(2, 4);
r.add(1, 3);
let mut mapping = HashMap::new();
mapping.insert(0, 1);
mapping.insert(1, 0);
mapping.insert(2, 2);
mapping.insert(3, 3);
mapping.insert(4, 4);
println!("{}", r.transitive());
println!("{}", r.map(&mapping).transitive());
println!("{}", r.map(&mapping).unmap(&mapping).transitive());
}
#[test]
fn simple_rel_test() {
const TESTSIZE: usize = 90;
let mut r = Relation::new(vec![1usize].into_iter());
for i in 1usize..TESTSIZE {
r.add(i, i + 1);
}
assert_eq!(r.universe, (1..TESTSIZE + 1).collect());
assert!(!r.contains((1, TESTSIZE)));
let start_one = Instant::now();
let rr = r.transitive();
let duration_one = start_one.elapsed();
println!("calculation of transitive closure took {:?}", duration_one);
for i in 1usize..TESTSIZE {
assert!(rr.contains((1, i)));
}
assert_eq!(rr.classes().len(), 1);
}
#[test]
fn classes_test() {
let mut r = Relation::new(vec![0, 1, 2, 3, 4, 5, 6, 7].into_iter());
r.add(0, 1);
r.add(1usize, 2);
r.add(2, 4);
r.add(3, 5);
r.add(5, 7);
r.add(2, 6);
let mut rr = BooleanTable::new(8);
rr.add(0, 1);
rr.add(1, 2);
rr.add(2, 4);
rr.add(3, 5);
rr.add(5, 7);
rr.add(2, 6);
println!("{:?}", r);
let t = r.transitive();
println!("{:?}", t);
println!("{:?}", t.classes());
rr.trans();
println!("{:?}", rr.all_classes());
}
#[test]
fn adj_mat_test() {
let mut r = Relation::new(vec![1, 2, 4].into_iter());
r.add(1usize, 4);
r.add(2, 4);
r.add(1, 2);
let mut am = AdjMat::from(&r);
println!("{:#?}", am);
am.transitive();
println!("{:#?}", am);
}
#[test]
fn new_relations_test() {
const TESTSIZE: usize = 90;
let mut r = BooleanTable::new(TESTSIZE);
for i in 0usize..(TESTSIZE - 1) {
r.add(i, i + 1);
}
assert!(!r.is(&1, &(TESTSIZE - 1)));
let start_one = Instant::now();
r.trans();
let duration_one = start_one.elapsed();
println!("calculation of transitive closure took {:?}", duration_one);
for i in 0usize..TESTSIZE {
assert!(r.is(&0, &i));
}
assert_eq!(r.all_classes().len(), 1);
}
#[test]
fn extend_test() {
let mut r = BooleanTable::new(10);
assert!(r.add(0, 9));
assert!(r.add(1, 9));
assert!(r.add(7, 1));
assert!(r.add(2, 9));
assert!(r.add(5, 2));
r.trans();
assert!(!r.add(1, 5));
}
}
#![allow(dead_code)]
use crate::rspni::Learner;
use clap::App;
use itertools::Itertools;
use log::{error, info};
use std::convert::{TryFrom, TryInto};
use std::path::Path;
use std::process::{exit, Command};
use zielonka::{LoopType, ZielonkaPath};
mod automaton;
mod bitset;
mod holecondition;
mod inclusions;
mod relation;
mod rspni;
pub mod sample;
pub mod word;
mod zielonka;
fn configure_logger(verbose: bool) {
let level = if verbose {
log::LevelFilter::Debug
} else {
log::LevelFilter::Info
};
fern::Dispatch::new()
.format(|out, message, record| {
out.finish(format_args!(
"{}[{}][{}] {}",
chrono::Local::now().format("[%H:%M:%S]"),
record.target(),
record.level(),
message
))
})
.level(level)
.chain(std::io::stderr())
.apply()
.unwrap();
}
fn main() {
let matches = App::new("rspni")
.version("0.1")
.author("Leon Bohn <bohn@lics.rwth-aachen.de>")
.about("yeah you know")
.arg("<input> 'gives sample input file'")
.arg("-v, --verbose 'turn on debugging information'")
.arg("-s, --series 'print out a series of intermediate automata'")
.arg("-d, --display 'show the initial automaton as well as the result'")
.arg("-o, --output=[DIR] 'directory to put the dot file(s) into")
.arg("-c, --combine 'do not merge generated files into one'")
.arg("-h, --nohoa 'do not output HOA representation of final automaton'")
.arg("--sample-print 'print sample that was read (words in reduced form)'")
.get_matches();
let verbose = matches.is_present("verbose");
let series = matches.is_present("series");
let display = matches.is_present("display");
let combine = !matches.is_present("combine");
let print_hoa = !matches.is_present("nohoa");
configure_logger(verbose);
let infile = matches.value_of("input").unwrap().to_string();
let out_dir = match matches.value_of("output") {
None => infile.clone(),
Some(dir) => {
if !Path::new(dir).is_dir() {
std::fs::create_dir_all(dir).expect("could not create outdir");
}
format!("{}/{}", dir, infile)
}
};
let mut l = match Learner::try_from(infile.clone()) {
Ok(l) => l,
Err(_) => panic!("could not create learner!"),
};
if matches.is_present("sample-print") {
info!("printing sample as requested:");
info!(
"positive:\n\t{}",
l.sample.pos.iter().map(|w| w.to_string()).join("\n\t")
);
info!(
"negative:\n\t{}",
l.sample.neg.iter().map(|w| w.to_string()).join("\n\t")
);
}
let learned_automaton = l.learn(series, out_dir);
let final_automaton = l.extend(learned_automaton);
if display {
l.base_automaton.display();
l.display_current();
}
if combine {
Command::new("sh")
.arg("-c")
.arg(format!(
"convert *.tmp.png +append {}.png && rm *.tmp.png",
infile
))
.spawn()
.expect("could not combine images...");
}
// output HOA representation
if print_hoa {
match TryInto::<ZielonkaPath>::try_into(l.classified_loops(&final_automaton)) {
Ok(zp) => println!("{}", final_automaton.output_tpa_hoa(&zp.parities())),
Err(why) => {
println!("cannot paritize {}. outputting helpful info", why);
for wrd in &l.sample.pos {
if let Some(cyc) = l.current_automaton.loop_for(wrd, LoopType::Accepting) {
println!(
"+{} => {}",
wrd,
cyc.edges
.iter()
.map(|(q, c, p)| format!(
"({}, {}, {})",
*q, //l.current_automaton.state(*q).unwrap().data,
*c,
*p
))
.join(",")
)
}
}
for wrd in &l.sample.neg {
if let Some(cyc) = l.current_automaton.loop_for(wrd, LoopType::Rejecting) {
println!(
"-{} => {}",
wrd,
cyc.edges
.iter()
.map(|(q, c, p)| format!(
"({}, {}, {})",
*q, //l.current_automaton.state(*q).unwrap().data,
*c,
*p
))
.join(",")
)
}
}
error!("paritization not possible!");
exit(1);
}
}
}
}
use crate::automaton::Index;
use itertools::Itertools;
use log::debug;
use std::cmp::Ordering;
use std::collections::{HashMap, HashSet};
use std::fmt::{Display, Formatter, Result};
use std::hash::Hash;
#[derive(Debug, Clone, Hash, Eq, PartialEq)]
pub struct OldLoop {
pub(crate) edges: Vec<(Index, char)>,
pub(crate) accepting: bool,
}
impl OldLoop {
pub fn new(edges: Vec<(Index, char)>, accepting: bool) -> OldLoop {
OldLoop { edges, accepting }
}
pub fn is_union(&self, left: &OldLoop, right: &OldLoop) -> bool {
left < self
&& right < self
&& left
.edges
.iter()
.chain(right.edges.iter())
.collect::<HashSet<_>>()
== self.edges.iter().collect()
}
}
impl Display for OldLoop {
fn fmt(&self, f: &mut Formatter<'_>) -> Result {
write!(f, "acc = {} | {:?}", self.accepting, self.edges)
}
}
impl PartialOrd for OldLoop {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
let ses = self.edges.iter().collect::<HashSet<_>>();
let oes = other.edges.iter().collect::<HashSet<_>>();
match ses.intersection(&oes).next().is_none() {
true => None,
false => {
if ses.is_subset(&oes) {
Some(Ordering::Less)
} else if ses.is_superset(&oes) {
Some(Ordering::Greater)
} else {
None
}
}
}
}
}
pub struct OldDAG<'a> {
pub(crate) loops: HashSet<&'a OldLoop>,
pub(crate) inclusions: HashSet<(&'a OldLoop, &'a OldLoop)>,
}
impl<'a> OldDAG<'a> {
pub fn new() -> OldDAG<'a> {
OldDAG {
loops: HashSet::new(),
inclusions: HashSet::new(),
}
}
pub fn add(&mut self, l: &'a OldLoop) -> bool {
self.loops.insert(l)
}
pub fn map(&mut self, subset: &'a OldLoop, supset: &'a OldLoop) -> bool {
assert!(self.loops.contains(subset) && self.loops.contains(supset));
self.inclusions.insert((subset, supset))
}
pub fn is(&self, subset: &'a OldLoop, supset: &'a OldLoop) -> bool {
assert!(self.loops.contains(subset) && self.loops.contains(supset));
self.inclusions.contains(&(subset, supset))
}
pub fn width(&self) -> usize {
let mut out = 0usize;
for l in &self.loops {
let mut inner_counter = 0usize;
let mut current = l;
while let Some((_c, n)) = self.inclusions.iter().find(|(l, _)| l == current) {
current = n;
inner_counter += 1;
}
out = std::cmp::max(out, inner_counter);
}
out
}
fn minima(&self) -> HashSet<&OldLoop> {
self.loops
.iter()
.filter(|l| !self.loops.iter().any(|l2| *l != l2 && l2 < *l))
.copied()
.collect()
}
fn maxima(&self) -> HashSet<&OldLoop> {
self.loops
.iter()
.filter(|l| !self.loops.iter().any(|l2| *l != l2 && *l < l2))
.copied()
.collect()
}
fn immediate(&self, left: &OldLoop, right: &OldLoop) -> bool {
left != right
&& left < right
&& !self
.loops
.iter()
.any(|&other| other != left && other != right && left < other && other < right)
}
fn up_chain(&self, from: &'a OldLoop) -> Vec<&'a OldLoop> {
let mut current = from;
let mut out = vec![current];
while let Some(next) = self.loops.iter().find(|&&l| self.immediate(current, l)) {
out.push(*next);
current = *next;
}
out
}
fn down_chain(&self, from: &'a OldLoop) -> Vec<&'a OldLoop> {
let mut current = from;
let mut out = vec![current];
while let Some(next) = self.loops.iter().find(|&&l| self.immediate(l, current)) {
out.push(*next);
current = *next;
}
out
}
fn alternations(&self, chain: Vec<&OldLoop>) -> usize {
let mut it = chain.iter();
let mut current = it.next().unwrap().accepting;
let mut out = 1usize;
for next in it {
if next.accepting != current {
current = next.accepting;
out += 1;
}
}
out
}
pub fn parity_index(&self) -> usize {
self.minima()
.iter()
.map(|m| self.alternations(self.up_chain(*m)))
.max()
.unwrap_or(1)
}
fn find_union_loop(&self, left: &OldLoop, right: &OldLoop) -> &OldLoop {
*self.loops.iter().find(|&&l| left < l && right < l).unwrap()
}
pub fn parity(&self) -> Option<HashMap<(Index, char), usize>> {
let paritizable = self
.loops
.iter()
.cartesian_product(self.loops.iter())
.filter(|(l1, l2)| l1 != l2 && l1.accepting == l2.accepting && (l1 < l2 || l2 < l1))
.all(|(l1, l2)| {
self.loops
.iter()
.filter(|supl| supl.is_union(*l1, *l2))
.all(|supl| supl.accepting == l1.accepting)
});
debug!("checking paritizability of {}", self);
for (l1, l2) in self
.loops
.iter()
.cartesian_product(self.loops.iter())
.filter(|(l1, l2)| l1 != l2 && l1.accepting == l2.accepting)
{
for supl in self.loops.iter().filter(|sup| sup.is_union(*l1, *l2)) {
if supl.accepting != l1.accepting || supl.accepting != l2.accepting {
println!("{} U {}", l1, l2);
println!("=> {}", supl);
}
}
}
if paritizable {
let mut out = HashMap::new();
let pi = self.parity_index();
for minh in self.minima() {
let dc = self.up_chain(minh);
let mut current = pi;
let mut it = dc.iter();
let mut prev = it.next().unwrap().accepting;
if prev && current % 2 != 0 {
current -= 1;
}
if !prev && current % 2 == 0 {
if current > 0 {
current -= 1;
} else {
current = 1;
}
}
for (q, c) in &dc.first().unwrap().edges {
out.insert((*q, *c), current);
}
for l in it {
if l.accepting != prev {
prev = !prev;
current -= 1;
}
for (q, c) in &l.edges {
out.entry((*q, *c)).or_insert(current);
}
}
}
Some(out)
} else {
None
}
}
}
impl Display for OldDAG<'_> {
fn fmt(&self, f: &mut Formatter<'_>) -> Result {
let mut mapping = HashMap::new();
let mut gnippam = HashMap::new();
for (it, l) in self.loops.iter().enumerate() {
writeln!(f, "{}: {}", it, l)?;
mapping.insert(l, it);
gnippam.insert(it, l);
}
for l in self.minima() {
writeln!(
f,
"{} ⊆ {}",
mapping.get(&l).unwrap(),
self.loops
.iter()
.filter(|ll| l < **ll)
.map(|ll| mapping.get(ll).unwrap().to_string())
.join(", ")
)?;
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::automaton::{base, DFA};
use crate::relation::Relation;
use crate::rspni::Learner;
use crate::sample::Sample;
use crate::word::Word;
#[test]
fn automaton_test() {
let a = 'a';
let b = 'b';
let mut aut = DFA::new(vec![a, b].into_iter().collect());
let q0 = aut.add(base("q0"));
aut.set_initial(q0);
let q1 = aut.add(base("q1"));
let q2 = aut.add(base("q2"));
aut.transition(q0, a, q1);
aut.transition(q0, b, q2);
aut.transition(q1, a, q0);
aut.transition(q1, b, q1);
aut.transition(q2, a, q0);
aut.transition(q2, b, q2);
println!("{:?}", aut.states);
let s = Sample {
pos: vec![
Word::new("", "a"),
Word::new("", "aba"),
Word::new("", "ba"),
Word::new("", "bba"),
Word::new("", "ababba"),
Word::new("", "aaba"),
Word::new("", "ababa"),
Word::new("", "bbaaa"),
],
neg: vec![],
};
let cong = Relation::new(0..aut.size());
let l = Learner {
current_automaton: aut.quotient(&cong),
base_automaton: aut,
congruence: cong,
sample: s,
};
println!("{:?}", l.base_automaton.cycle(&Word::new("", "aba")));
println!("paritizable = {}", l.paritizable(&l.base_automaton));
}
#[test]
fn simple_width_test() {
let mut dag = OldDAG::new();
let l1 = OldLoop::new(vec![(1, 'a')], true);
let l2 = OldLoop::new(vec![(1, 'b')], true);
let l3 = OldLoop::new(vec![(1, 'c')], true);
let l4 = OldLoop::new(vec![(1, 'd')], true);
let l5 = OldLoop::new(vec![(1, 'e')], true);
let l6 = OldLoop::new(vec![(1, 'f')], true);
dag.add(&l1);
dag.add(&l2);
dag.add(&l3);
dag.add(&l4);
dag.add(&l5);
dag.add(&l6);
dag.map(&l1, &l2);
dag.map(&l2, &l5);
dag.map(&l3, &l4);
dag.map(&l5, &l4);
dag.map(&l5, &l6);
println!("{}", dag.width());
}
}
use std::{
collections::{HashMap, HashSet},
convert::TryFrom,
fmt::{Debug, Display},
ops::{BitAnd, BitOr, Not},
};
pub trait SetElement: Debug + Eq + PartialEq {}
use itertools::Itertools;
use crate::{automaton::Transition, bitset::BitSet};
#[derive(Debug, Eq, PartialEq, Hash)]
pub struct Subset<'a, A: SetElement> {
universe: &'a [A],
mask: BitSet,
}
pub struct Elements<'a, A: SetElement> {
set: &'a Subset<'a, A>,
pos: usize,
}
impl<'a, A: SetElement> Iterator for Elements<'a, A> {
type Item = &'a A;
fn next(&mut self) -> Option<Self::Item> {
while self.pos < self.set.universe_size() {
if !self.set.mask.is(self.pos) {
self.pos += 1;
} else {
let out = &self.set.universe[self.pos];
self.pos += 1;
return Some(out);
}
}
None
}
}
impl<'a, A: SetElement> Subset<'a, A> {
pub fn new<'b>(universe: &'a [A], elements: &'b [A]) -> Subset<'a, A> {
Subset {
universe: universe,
mask: BitSet::try_from(
elements
.iter()
.map(|e| {
universe
.iter()
.enumerate()
.find(|(_l, r)| e == *r)
.unwrap()
.0
})
.collect_vec(),
)
.unwrap_or(BitSet::zeroes(universe.len())),
}
}
fn index(&self, element: &A) -> Option<usize> {
self.universe
.iter()
.enumerate()
.find(|(_l, r)| *r == element)
.map(|(l, _r)| l)
}
pub fn contains(&self, what: &A) -> bool {
self.elements().any(|x| what == x)
}
pub fn size(&self) -> usize {
self.elements().count()
}
fn remove(&mut self, what: &A) {
if let Some(pos) = self.index(what) {
self.mask = self
.mask
.intersection(&!&BitSet::just(self.mask.size(), pos))
} else {
panic!("element {:?} not found!", what)
}
}
fn add(&mut self, what: &A) {
if let Some(pos) = self.index(what) {
self.mask = self.mask.union(&BitSet::just(self.mask.size(), pos))
} else {
panic!("element {:?} not found!", what)
}
}
pub fn bare(universe: &'a [A]) -> Subset<'a, A> {
Subset {
universe: universe,
mask: BitSet::ones(universe.len()),
}
}
pub fn empty(universe: &'a [A]) -> Subset<'a, A> {
Subset {
universe: universe,
mask: BitSet::zeroes(universe.len()),
}
}
pub fn universe_size(&self) -> usize {
self.universe.len()
}
pub fn elements(&self) -> Elements<A> {
Elements { set: &self, pos: 0 }
}
}
impl<'a, A: SetElement> BitAnd<&'a Subset<'a, A>> for &'a Subset<'a, A> {
type Output = Subset<'a, A>;
fn bitand(self, rhs: &'a Subset<'a, A>) -> Self::Output {
assert_eq!(
self.universe, rhs.universe,
"universes must match for & operation"
);
Subset {
universe: self.universe,
mask: self.mask.intersection(&rhs.mask),
}
}
}
impl<'a, A: SetElement> BitOr<&'a Subset<'a, A>> for &'a Subset<'a, A> {
type Output = Subset<'a, A>;
fn bitor(self, rhs: &'a Subset<'a, A>) -> Self::Output {
assert_eq!(
self.universe, rhs.universe,
"universes must be equal for & operation"
);
Subset {
universe: self.universe,
mask: self.mask.union(&rhs.mask),
}
}
}
impl<'a, A: SetElement> Not for &'a Subset<'a, A> {
type Output = Subset<'a, A>;
fn not(self) -> Self::Output {
Subset {
universe: self.universe,
mask: !&self.mask,
}
}
}
impl<'a, A: SetElement> PartialOrd for Subset<'a, A> {
fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
if self.universe != other.universe {
None
} else {
match (
self.mask.subset_of(&other.mask),
other.mask.subset_of(&self.mask),
) {
(true, true) => Some(std::cmp::Ordering::Equal),
(false, true) => Some(std::cmp::Ordering::Greater),
(true, false) => Some(std::cmp::Ordering::Less),
(false, false) => None,
}
}
}
}
impl<'a, A: SetElement + Display> Display for Subset<'a, A> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
writeln!(
f,
"unverse: {{{}}}\nelements: {{{}}}",
self.universe.iter().map(|i| i.to_string()).join(", "),
self.mask
.values()
.map(|i| self.universe[i].to_string())
.join(", ")
)
}
}
#[derive(Debug, Eq, PartialEq)]
enum ElementKind {
Positive,
Negative,
Hole,
}
pub trait Conditionable: Eq + PartialEq + Debug + SetElement {
fn combine(&self, other: &Self) -> Self;
fn size(&self) -> usize;
}
impl Conditionable for Vec<Transition> {
fn combine(&self, other: &Self) -> Self {
self.iter().chain(other.iter()).cloned().collect()
}
fn size(&self) -> usize {
self.len()
}
}
impl Conditionable for HashSet<usize> {
fn combine(&self, other: &Self) -> Self {
self.union(&other).copied().collect()
}
fn size(&self) -> usize {
self.len()
}
}
impl<C: Conditionable> SetElement for C {}
pub struct HoleCondition<'a, E: Conditionable> {
universe: &'a [E],
pos: Subset<'a, E>,
neg: Subset<'a, E>,
hol: Subset<'a, E>,
}
impl<'a, E: Conditionable> HoleCondition<'a, E> {
/// Computes the maximal element of the hole condition, which is obtained by using
/// the size() method of the associated Conditionable type E.
pub fn max(&self) -> &E {
let mut it = self
.pos
.elements()
.chain(self.neg.elements())
.chain(self.hol.elements());
let fst = it.next().unwrap();
it.fold(fst, |acc, x| if acc.size() < x.size() { x } else { acc })
}
/// Obtains an iterator over all holes of a given size.
fn sized_holes(&self, size: usize) -> impl Iterator<Item = &E> + '_ {
self.hol.elements().filter(move |&e| e.size() == size)
}
fn sized_neg(&self, size: usize) -> Option<&E> {
self.neg.elements().find(|x| x.size() == size)
}
fn sized_pos(&self, size: usize) -> Option<&E> {
self.neg.elements().find(|x| x.size() == size)
}
/// Takes a _union-closed_ Hole Condition and determines it to a union-closed Muller
/// condition, which can in turn be used to compute a Parity condition.
/// The algorithm works as follows:
/// - we start with the maximal element M, which is by definition assumed to be determined
/// - all maximal undetermined subsets of M are then determined to have the same
/// type as M itself
/// - this procedure is iterated until no undetermined sets remain
pub fn determine(&mut self) {
// we need that the given condition is union-closed
assert!(self.union_closed());
// and that the maximal element is not a hole itself as oltherwise the resulting condition
// is not necessarily unique
// kind is true if and only if we need to look into pos
let kind = self.pos.contains(self.max());
let mut cur = self.max().clone();
let mut cur_size = self.max().size();
assert!(!self.hol.contains(&cur));
let mut add_pos = Vec::new();
let mut add_neg = Vec::new();
// while the current element is not minimal
while cur_size > 1 {
assert!(!self.hol.contains(&cur));
// we obtain all holes that have size one less than the current element
let holes = self.sized_holes(cur_size - 1).collect_vec();
if holes.len() > 0 {
// it this set is non-empty, we determine it according to cur's type
if kind {
for e in holes {
add_pos.push(e);
}
} else {
for e in holes {
add_neg.push(e);
}
}
cur_size -= 1;
}
// otherwise we simply obtain the successive element, as we know that both pos
// and neg must be union-closed, there must exist exactly one unique maximal
// set, allowing us to panic in case it is not found.
match (self.sized_neg(cur_size - 1), self.sized_pos(cur_size - 1)) {
(None, None) => panic!("the subset cannot be in neither of the three sets"),
(None, Some(set)) => cur = set,
(Some(set), None) => cur = set,
(Some(_), Some(_)) => {
panic!("the subset cannot be positive and negative at the same time!")
}
}
cur_size -= 1;
}
}
pub fn new(pos: Subset<'a, E>, neg: Subset<'a, E>, hol: Subset<'a, E>) -> HoleCondition<'a, E> {
HoleCondition {
universe: pos.universe,
pos,
neg,
hol,
}
}
pub fn incomplete(pos: Vec<&'a E>, neg: Vec<&'a E>) -> HoleCondition<'a, E> {
todo!() //pos.iter().chain(neg.iter()).flat_map(|e| e.)
}
pub fn union_closed(&self) -> bool {
// for (&a, &b) in self.pos.elements().cartesian_product(self.pos.elements()) {
// if !self.pos.contains(&&a.combine(b)) {
// return false;
// }
// }
// for (&a, &b) in self.neg.iter().cartesian_product(self.neg.iter()) {
// if !self.neg.contains(&&a.combine(b)) {
// return false;
// }
// }
// true
for p1 in self.pos.elements() {
for p2 in self.pos.elements() {
if !self.pos.contains(&p1.combine(p2)) {
println!(
"+ {:?} and + {:?} result in !+ {:?}",
p1,
p2,
&p1.combine(p2)
);
return false;
}
}
}
for p1 in self.neg.elements() {
for p2 in self.neg.elements() {
if !self.neg.contains(&p1.combine(p2)) {
println!(
"- {:?} and - {:?} result in !- {:?}",
p1,
p2,
&p1.combine(p2)
);
return false;
}
}
}
true
}
}
impl<'a, E: Conditionable> From<(&'a [E], HashMap<&'a E, ElementKind>)> for HoleCondition<'a, E> {
fn from(input: (&'a [E], HashMap<&'a E, ElementKind>)) -> Self {
let (universe, m) = input;
let (pos, neg, hol) = m.iter().fold(
(
Subset::empty(universe),
Subset::empty(universe),
Subset::empty(universe),
),
|(pos, neg, hol), (&e, kind)| {
let upd = |mut hs: Subset<'a, E>| {
hs.add(e);
hs
};
match kind {
ElementKind::Positive => (upd(pos), neg, hol),
ElementKind::Negative => (pos, upd(neg), hol),
ElementKind::Hole => (pos, neg, upd(hol)),
}
},
);
HoleCondition {
pos,
neg,
hol,
universe,
}
}
}
fn powerset<T: Clone>(s: Vec<T>) -> Vec<Vec<T>> {
(0..=s.len())
.map(|count| s.clone().into_iter().combinations(count))
.flatten()
.collect()
}
impl<'a, E: Conditionable> From<(&'a [E], Vec<E>, Vec<E>, Vec<E>)> for HoleCondition<'a, E> {
fn from(params: (&'a [E], Vec<E>, Vec<E>, Vec<E>)) -> Self {
let (univ, pos, neg, hol) = params;
HoleCondition {
universe: univ,
pos: Subset::new(univ, pos.as_slice()),
neg: Subset::new(univ, neg.as_slice()),
hol: Subset::new(univ, hol.as_slice()),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
impl Conditionable for usize {
fn combine(&self, other: &Self) -> Self {
if self % other == 0 {
*self
} else if other % self == 0 {
*other
} else {
self * other
}
}
fn size(&self) -> usize {
todo!()
}
}
impl Conditionable for Vec<usize> {
fn combine(&self, other: &Self) -> Self {
self.iter()
.chain(other.iter())
.cloned()
.collect::<HashSet<_>>()
.into_iter()
.collect_vec()
}
fn size(&self) -> usize {
self.len()
}
}
#[test]
fn closing_test() {
let univ = powerset((1usize..5).collect_vec());
let pos = vec![
vec![1, 2, 3, 4],
vec![1, 2, 4],
vec![1, 2],
vec![1, 4],
vec![2, 4],
];
let neg = vec![vec![1, 2, 3], vec![1, 3], vec![1]];
let hol = vec![
vec![1, 3, 4],
vec![2, 3, 4],
vec![2, 3],
vec![3, 4],
vec![2],
vec![4],
];
let hc = HoleCondition::from((univ.as_slice(), pos, neg, hol));
assert_eq!(vec![1, 2, 4], vec![1, 4, 2]);
assert!(hc.union_closed());
}
#[test]
fn from_hashmap_test() {
let univ = vec![2, 3, 5, 7, 14, 11, 4];
let input: HashMap<_, ElementKind> = vec![
(&2, ElementKind::Positive),
(&3, ElementKind::Negative),
(&5, ElementKind::Hole),
(&7, ElementKind::Positive),
(&14, ElementKind::Positive),
(&4, ElementKind::Hole),
]
.into_iter()
.collect();
let mut cond = HoleCondition::from((univ.as_slice(), input));
assert_eq!(cond.pos.size(), 3);
assert_eq!(cond.neg.size(), 1);
assert_eq!(cond.hol.size(), 2);
assert!(cond.union_closed());
cond.neg.add(&11);
assert!(!cond.union_closed());
}
#[test]
fn subset_plays() {
let universe = (0..10).into_iter().collect_vec();
let s1 = Subset::new(&universe, &vec![1, 3, 5]);
let s2 = Subset::new(&universe, &vec![1, 3, 5, 9]);
let s3 = Subset::new(&universe, &vec![4]);
let s4 = Subset::new(&universe, &vec![9]);
assert!(s1 < s2);
assert!(!(s1 < s3));
assert!(!(s3 < s2));
assert_eq!(&s1 | &s4, s2);
assert_eq!(&s3 | &(&s3 & &s2), s3);
}
}
use std::{
cmp::{max, min},
convert::TryFrom,
ops::{BitAnd, BitOr, Not},
};
use itertools::Itertools;
#[derive(Debug, Clone, Hash)]
pub struct BitSet {
bits: Vec<u64>,
nbits: usize,
}
fn population(u: u64) -> u64 {
const M1: u64 = 0x5555555555555555;
const M2: u64 = 0x3333333333333333;
const M4: u64 = 0x0f0f0f0f0f0f0f0f;
const H01: u64 = 0x0101010101010101;
let mut out = u - ((u >> 1) & M1);
out = (out & M2) + ((out >> 2) & M2);
out = (out + (out >> 4)) & M4;
out.wrapping_mul(H01) >> 56
}
impl PartialEq for BitSet {
fn eq(&self, other: &Self) -> bool {
if self.chunks() != other.chunks() {
return false;
}
(0..self.chunks())
.into_iter()
.all(|chk| self.bits[chk] == other.bits[chk])
}
}
impl Eq for BitSet {}
pub struct Values<'a> {
bs: &'a BitSet,
current: usize,
}
impl<'a> Iterator for Values<'a> {
type Item = usize;
fn next(&mut self) -> Option<Self::Item> {
while self.current < self.bs.nbits {
if !self.bs.is(self.current) {
self.current += 1;
} else {
let out = self.current;
self.current += 1;
return Some(out);
}
}
None
}
}
impl BitSet {
fn chunks(&self) -> usize {
self.nbits / 64 + min(1, self.nbits % 64)
}
/// Create a new bitset with the specified capacity, for now we do not allow resizing
/// of an existing bitset, thus the capacity (i.e. the number of possible bits to be set)
/// has to be provided at creation.
pub fn zeroes(capacity: usize) -> Self {
BitSet {
bits: vec![0u64]
.into_iter()
.cycle()
.take(capacity / 64 + min(1, capacity % 64))
.collect(),
nbits: capacity,
}
}
pub fn values(&self) -> Values {
Values {
bs: &self,
current: 0,
}
}
/// Create a new bitset with specified capacity, where each bit is set to one.
pub fn ones(capacity: usize) -> Self {
BitSet {
bits: vec![1u64]
.into_iter()
.cycle()
.take(capacity / 64 + min(1, capacity % 64))
.collect(),
nbits: capacity,
}
}
pub fn just(capacity: usize, pos: usize) -> Self {
let mut out = BitSet::zeroes(capacity);
out.set(pos);
out
}
/// Returns the capacity of the BitSet, i.e. the number of positions that can be set.
pub fn size(&self) -> usize {
self.nbits
}
/// Counts how many positions are set to one.
pub fn count(&self) -> usize {
(0..self.chunks())
.map(|id| population(self.bits[id]))
.fold(0, |sum, i| sum + (i as usize))
}
/// Checks whether a given position is set and returns true if and only if that is the case.
pub fn is(&self, id: usize) -> bool {
let chunk = self.bits[id / 64];
(chunk >> (id % 64)) & 0x01 == 0x01
}
/// Set the given position, regardless of its previous status.
pub fn set(&mut self, id: usize) {
if let Some(chunk) = self.bits.get_mut(id / 64) {
*chunk |= 0x01 << (id % 64);
}
}
/// Unsets the given position, regardless of its previous status. If an index that is
/// that is not part of the set is provided, this function is a NOOP.
pub fn unset(&mut self, id: usize) {
if let Some(chunk) = self.bits.get_mut(id / 64) {
*chunk &= !(0x01 << (id % 64));
}
}
fn apply<F: Fn(u64, u64) -> u64>(&self, other: &BitSet, f: F) -> BitSet {
assert!(self.chunks() >= other.chunks());
let mut out = BitSet::from(&self.bits);
for i in 0..other.chunks() {
out.bits[i] = f(out.bits[i], other.bits[i]);
}
out
}
/// Returns a BitSet that represents the union of self and other.
pub fn union(&self, other: &BitSet) -> BitSet {
let (l, r) = match self.size() >= other.size() {
true => (self, other),
false => (other, self),
};
l.apply(r, |x, y| x | y)
}
/// Returns a BitSet with exactly those positions set that are set in self and other.
pub fn intersection(&self, other: &BitSet) -> BitSet {
let (l, r) = match self.size() >= other.size() {
true => (self, other),
false => (other, self),
};
l.apply(r, |x, y| x & y)
}
/// Returs true if and only if self is a subset of other.
pub fn subset_of(&self, other: &BitSet) -> bool {
if self.chunks() > other.chunks() {
return false;
}
for i in 0..self.chunks() {
if self.bits[i] & other.bits[i] != self.bits[i] {
return false;
}
}
true
}
/// Checks whether self is a superset of other and returns true in that case.
pub fn superset_of(&self, other: &BitSet) -> bool {
other.subset_of(self)
}
/// Computes a vec containing all indices that are set to true.
pub fn indices(&self) -> Vec<usize> {
(0..self.nbits)
.into_iter()
.filter(|id| self.is(*id))
.collect()
}
}
impl From<&Vec<u64>> for BitSet {
fn from(input: &Vec<u64>) -> Self {
BitSet {
bits: input.to_vec(),
nbits: input.len() * 64,
}
}
}
impl<'a> Not for &'a BitSet {
type Output = BitSet;
fn not(self) -> Self::Output {
BitSet {
bits: self.bits.iter().map(|v| !(*v)).collect(),
nbits: self.nbits,
}
}
}
impl TryFrom<Vec<usize>> for BitSet {
type Error = &'static str;
fn try_from(value: Vec<usize>) -> Result<Self, Self::Error> {
let max = value.iter().max().ok_or("could not determine maximum")?;
let mut out = BitSet::zeroes(*max + 1);
for i in value {
out.set(i);
}
Ok(out)
}
}
impl Into<Vec<usize>> for BitSet {
fn into(self) -> Vec<usize> {
(0..self.nbits)
.into_iter()
.filter(|i| self.is(*i))
.collect()
}
}
impl Into<Vec<u64>> for BitSet {
fn into(self) -> Vec<u64> {
self.bits
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn conversions_test() {
let basis = vec![4, 8, 15, 16, 23, 42];
let bs = BitSet::from(&basis);
assert_eq!(Into::<Vec<u64>>::into(bs), basis);
}
#[test]
fn more_chunks_test() {
let mut bs = BitSet::zeroes(100);
bs.set(99);
assert!(bs.is(99));
}
#[test]
fn indices_test() {
let a = BitSet::from(&vec![107]);
assert_eq!(a.indices(), vec![0, 1, 3, 5, 6]);
let b = BitSet::try_from(vec![1, 5, 6, 7, 8, 100]).unwrap();
assert_eq!(b.indices(), vec![1, 5, 6, 7, 8, 100]);
}
#[test]
fn boolean_operations_test() {
// 107 represents 1101011 i.e. {0,1,3,5,6}
let a = BitSet::from(&vec![107]);
// 98 represents 1100010 i.e. {1,5,6}
let b = BitSet::from(&vec![98]);
// 20 is {2,4}
let c = BitSet::from(&vec![20]);
// 127 is {0,1,2,3,4,5,6}
let d = BitSet::from(&vec![127]);
// a n b = b and a u b = a
assert_eq!(a.intersection(&b), b);
assert_eq!(a.union(&b), a);
// a n c = 0 and a u c = {0,1,2,3,4,5,6} = d
assert_eq!(a.intersection(&c), BitSet::zeroes(7));
assert_eq!(a.union(&c), d);
}
#[test]
fn subset_test() {
// 107 represents 1101011 i.e. {0,1,3,5,6}
let b1 = BitSet::from(&vec![107]);
assert!(b1.is(0));
assert!(b1.is(1));
assert!(!b1.is(2));
assert!(b1.is(3));
assert!(!b1.is(4));
assert!(b1.is(5));
assert!(b1.is(6));
// similarly 40 is 101000 i.e. {3,5}
let b2 = BitSet::from(&vec![40]);
assert!(b2.subset_of(&b1));
assert!(!b1.subset_of(&b2));
}
#[test]
fn set_unset_test() {
let mut bs = BitSet::zeroes(100);
assert!(!bs.is(20));
bs.set(20);
assert!(bs.is(20));
assert_eq!(bs.count(), 1);
bs.unset(20);
assert!(!bs.is(20));
assert_eq!(bs.count(), 0);
}
}
use std::hash::Hash;
use std::{
collections::{HashMap, HashSet, VecDeque},
iter,
};
use std::{
fmt::{Debug, Display, Formatter, Result},
ops::Add,
};
use crate::automaton::ParityAcceptance::InfAtom;
use crate::relation::{BinaryRelation, Relation};
use crate::word::Word;
use crate::zielonka::{Loop, LoopType};
use itertools::Itertools;
use log::{debug, info};
use rand::{distributions::Alphanumeric, thread_rng, Rng};
use std::fs::File;
use std::io::Write;
use std::path::Path;
use std::process::Command;
pub trait Alphabet: Copy + Eq + PartialEq + Hash {}
impl Alphabet for char {}
pub trait StateData: Eq + Hash + Clone + Debug {}
impl StateData for Base {}
impl StateData for Class {}
impl StateData for usize {}
#[derive(Debug, Eq, PartialEq, Clone, Copy)]
pub struct Transition(Index, char, Index);
impl Display for Transition {
fn fmt(&self, f: &mut Formatter<'_>) -> Result {
write!(f, "({}, {}, {})", self.0, self.1, self.2)
}
}
enum ParityAcceptance {
InfAtom(usize),
FinAtom(usize),
Conjunction(Box<ParityAcceptance>, Box<ParityAcceptance>),
Disjunction(Box<ParityAcceptance>, Box<ParityAcceptance>),
}
impl Display for ParityAcceptance {
fn fmt(&self, f: &mut Formatter<'_>) -> Result {
match self {
ParityAcceptance::InfAtom(n) => write!(f, "Inf({})", n),
ParityAcceptance::FinAtom(n) => write!(f, "Fin({})", n),
ParityAcceptance::Conjunction(l, r) => {
write!(f, "({} & {})", l, r)
}
ParityAcceptance::Disjunction(l, r) => {
write!(f, "({} | {})", l, r)
}
}
}
}
impl From<usize> for ParityAcceptance {
fn from(size: usize) -> Self {
debug!("computing parityacceptance of size {}", size);
let mut out = InfAtom(0);
for i in 1..size {
match i % 2 == 0 {
false => {
out = ParityAcceptance::Disjunction(
Box::from(ParityAcceptance::FinAtom(i)),
Box::from(out),
)
}
true => {
out = ParityAcceptance::Conjunction(
Box::from(ParityAcceptance::InfAtom(i)),
Box::from(out),
)
}
}
}
out
}
}
#[derive(Eq, PartialEq, Clone, Hash)]
pub struct Base {
pub(crate) name: String,
}
impl Debug for Base {
fn fmt(&self, f: &mut Formatter<'_>) -> Result {
write!(f, "{}", self.to_string())
}
}
impl Display for Base {
fn fmt(&self, f: &mut Formatter<'_>) -> Result {
if self.name == "" {
write!(f, "ε")
} else {
write!(f, "{}", self.name)
}
}
}
impl Display for Class {
fn fmt(&self, f: &mut Formatter<'_>) -> Result {
write!(
f,
"[{}]",
self.children
.iter()
.map(|c| c.to_string())
.collect::<Vec<_>>()
.join(", ")
)
}
}
pub fn base<S: ToString>(from: S) -> Base {
Base {
name: from.to_string(),
}
}
#[derive(Debug, Clone, Eq, PartialEq, Hash)]
pub struct Class {
children: Vec<Base>,
}
impl Class {
pub fn named(name: &str) -> Class {
Class {
children: vec![base(name)],
}
}
pub fn dummy() -> Class {
let mut rng = thread_rng();
let basename = iter::repeat(())
.map(|()| rng.sample(Alphanumeric))
.take(4)
.collect::<String>();
Class {
children: vec![base(basename)],
}
}
}
#[derive(Debug)]
pub struct State<D> {
pub data: D,
}
pub type Index = usize;
pub enum RunResult {
Successful(Vec<Index>),
Leaving(Vec<Index>),
}
#[derive(Debug)]
pub struct DFA<S> {
pub(crate) initial: Index,
transitions: HashMap<(Index, char), Index>,
pub(crate) states: Vec<State<S>>,
alphabet: HashSet<char>,
}
pub struct Run<'a, I: Iterator<Item = char>, S> {
on: I,
current: Index,
automaton: &'a DFA<S>,
first: bool,
}
impl<'a, I: Iterator<Item = char>, S: StateData + Display> Iterator for Run<'a, I, S> {
type Item = Index;
fn next(&mut self) -> Option<Self::Item> {
if self.first {
self.first = false;
Some(self.current)
} else {
match self.on.next() {
None => None,
Some(chr) => match self.automaton.successor(self.current, chr) {
None => None,
Some(succ) => {
self.current = *succ;
Some(*succ)
}
},
}
}
}
}
impl<S: StateData + Display> DFA<S> {
pub fn new(alphabet: HashSet<char>) -> DFA<S> {
DFA {
initial: usize::max_value(),
transitions: HashMap::new(),
states: Vec::new(),
alphabet,
}
}
pub fn advance<I: Iterator<Item = char>>(&self, from: Index, by: I) -> Option<Index> {
match self.run(from, by) {
RunResult::Successful(run) => Some(*run.last().unwrap()),
RunResult::Leaving(_) => None,
}
}
pub fn elementary_loops(&self) -> HashSet<Loop> {
#[derive(Debug, Clone, Eq, PartialEq)]
enum Status {
Red, // not yet seen
Yellow, // seen and currently residing on the stack
Green, // already fully explored
}
let mut out = HashSet::new();
for state_id in 0..self.size() {
let mut status: Vec<Status> = iter::repeat(Status::Red).take(self.size()).collect_vec();
let mut stack = Vec::with_capacity(self.size());
let mut symbols: Vec<Vec<char>> = Vec::new();
stack.push(state_id);
symbols.push(vec![]);
while let Some(cur) = stack.pop() {
let symbols_to_here = symbols.pop().unwrap();
status[cur] = Status::Yellow;
for sym in &self.alphabet {
if let Some(succ) = self.successor(cur, *sym) {
let mut ns = symbols_to_here.clone();
ns.push(*sym);
match status[*succ] {
Status::Red => {
stack.push(*succ);
symbols.push(ns);
}
Status::Yellow => {
if *succ == state_id {
// println!("{} --{:?}--> {}", state_id, ns, state_id);
let mut seq = vec![];
let mut current = state_id;
for character in ns {
let next = *self.successor(current, character).unwrap();
seq.push((current, character, next));
current = next;
}
out.insert(Loop::new(seq.into_iter(), LoopType::Undetermined));
}
}
Status::Green => {}
}
}
}
}
}
out
}
pub fn loop_for(&self, w: &Word, kind: LoopType) -> Option<Loop> {
let it = w.chars().enumerate().skip(w.base.len());
let mut seen = HashSet::new();
let mut cyc = Vec::new();
let mut cur = self.advance(self.initial, w.chars().take(w.base.len()))?;
for (pos, chr) in it {
let next = (cur, w - pos);
cyc.push(next.clone());
if !seen.insert(next.clone()) {
let pos = cyc.iter().skip_while(|(a, b)| *a != cur || b != &next.1);
return Some(Loop::new(
pos.map(|(a, b)| {
(
*a,
b.chars().next().unwrap(),
*self.successor(*a, b.chars().next().unwrap()).unwrap(),
)
}),
kind,
));
}
if let Some(succ) = self.successor(cur, chr) {
cur = *succ;
} else {
return None;
}
}
unreachable!();
}
pub fn close(&self, mut out: HashSet<Loop>) -> HashSet<Loop> {
// We have computed the set of elementary (simple) loops, i.e. those that do not intersect
// themselves. For the set of _all_ cycles, we add the union of all pairs of loops until
// a fixpoint is reached
let mut added = true;
while added {
added = false;
let toadd: HashSet<Loop> = out
.iter()
.cartesian_product(out.iter())
// .filter(|(l1, l2)| *l1 % *l2)
.map(|(l1, l2)| l1.clone().add(l2.clone()))
.filter(|l| !out.iter().any(|ll| l.coincides(ll)))
.collect();
if !toadd.is_subset(&out) {
added = true;
out.extend(toadd.into_iter());
}
}
info!("out: {:?}", out);
let mut todrop = HashSet::new();
let indefinite = out
.iter()
.filter(|&l| l.kind == LoopType::Undetermined)
.collect_vec();
for i in indefinite {
if out
.iter()
.any(|l| l.kind != LoopType::Undetermined && i.coincides(l))
{
todrop.insert(i.clone());
}
}
out.difference(&todrop).cloned().collect()
}
pub fn run<I>(&self, from: Index, on: I) -> RunResult
where
I: IntoIterator<Item = char>,
{
let mut states = vec![from];
let mut cur = from;
for chr in on {
if self.has_successor(cur, chr) {
cur = *self.successor(cur, chr).unwrap();
states.push(cur);
} else {
return RunResult::Leaving(states);
}
}
RunResult::Successful(states)
}
fn run_iter<I: Iterator<Item = char>>(&self, from: Index, on: I) -> Run<I, S> {
Run {
on,
current: from,
automaton: &self,
first: true,
}
}
pub fn cycle(&self, on: &Word) -> Option<Vec<(Index, usize, char)>> {
let from = self.advance(self.initial, on.base.chars())?;
self.compute_cycle(from, on.ext.chars().collect())
}
fn reachable(&self) -> HashSet<Index> {
let mut queue = VecDeque::from(vec![self.initial]);
let mut out = HashSet::new();
while let Some(current) = queue.pop_front() {
if out.insert(current) {
for sym in &self.alphabet {
if let Some(succ) = self.successor(current, *sym) {
queue.push_back(*succ);
}
}
}
}
out
}
pub fn distinguishable(&self, left: &Word, right: &Word) -> bool {
if left == right {
return false;
}
if let Some(lcyc) = self.cycle(left) {
return if let Some(rcyc) = self.cycle(right) {
let left_transitions = lcyc.iter().map(|(q, _, c)| (q, c)).collect::<HashSet<_>>();
let right_transitions = rcyc.iter().map(|(q, _, c)| (q, c)).collect::<HashSet<_>>();
left_transitions != right_transitions
} else {
true
};
} else if self.cycle(right).is_some() {
return true;
}
match self.run(self.initial, left.chars()) {
RunResult::Successful(_) => match self.run(self.initial, right.chars()) {
RunResult::Successful(_) => self.cycle(left) != self.cycle(right),
RunResult::Leaving(_) => true,
},
RunResult::Leaving(seq) => match self.run(self.initial, right.chars()) {
RunResult::Successful(_) => true,
RunResult::Leaving(qes) => {
seq.last() != qes.last() || (left - seq.len()) != (right - qes.len())
}
},
}
}
pub fn induced_cycle(&self, word: &Word) -> Option<Vec<(Index, char)>> {
let it = word.chars().enumerate().skip(word.base.len());
let mut seen = HashSet::new();
let mut cyc = Vec::new();
let mut cur = self
.advance(self.initial, word.chars().take(word.base.len()))
.unwrap();
// convert to graph where only 'fitting' transitions are present
for (pos, chr) in it {
let next = (cur, word - pos);
cyc.push(next.clone());
if !seen.insert(next.clone()) {
// compute cycle from from cur on next.1
let pos = cyc
.iter()
.skip_while(|(a, b)| *a != cur && b != &next.1)
.collect_vec();
return Some(
pos.iter()
.map(|(a, b)| (*a, b.chars().next().unwrap()))
.collect_vec(),
);
}
let next = self.successor(cur, chr);
match next {
None => return None,
Some(c) => cur = *c,
}
}
unimplemented!()
}
fn compute_cycle(&self, from: Index, on: Vec<char>) -> Option<Vec<(Index, usize, char)>> {
let mut wrd_iter = on.iter().enumerate().cycle();
let mut cyc = Vec::new();
let mut cur = from;
loop {
if let Some((i, chr)) = wrd_iter.next() {
if cyc.contains(&(cur, i, *chr)) {
// now we have to remove the possible preceeding part that does not belong
// to the cycle
cyc.push((cur, i, *chr));
return Some(cyc);
}
cyc.push((cur, i, *chr));
let next = self.successor(cur, *chr);
match next {
None => return None,
Some(c) => cur = *c,
}
} else {
return None;
}
}
}
pub fn set_initial(&mut self, state: Index) {
assert!(self.states.len() > state);
self.initial = state
}
pub fn has_successor(&self, from: Index, sym: char) -> bool {
self.successor(from, sym).is_some()
}
pub fn output_tpa_hoa(&self, parities: &HashMap<(Index, char, Index), usize>) -> String {
assert!(!self.states.is_empty());
// assert_eq!(self.transitions.len(), parities.len());
let mut out = Vec::new();
// let colour_count = parities.values().collect::<HashSet<_>>().len();
let num_alph = self.alphabet.iter().sorted().enumerate().collect_vec();
let colour_count = *parities.values().into_iter().max().unwrap() + 1;
let highest_even = match (colour_count - 1) % 2 == 0 {
true => colour_count - 1,
false => colour_count - 2,
};
out.push(format!(
"HOA: v1\nStates: {}\nStart: {}",
self.size(),
self.initial
));
out.push(format!("acc-name: parity min even {}", colour_count));
out.push(format!(
"Acceptance: {} {}",
colour_count,
ParityAcceptance::from(colour_count)
));
out.push(format!(
"AP: {} {}",
self.alphabet.len(),
num_alph
.iter()
.map(|(_i, c)| format!("\"{}\"", c))
.join(" ")
));
out.push("--BODY--".to_string());
let sym_to_string = |i: usize| {
(0..self.alphabet.len())
.into_iter()
.map(|n| {
if n == i {
format!("{}", n)
} else {
format!("!{}", n)
}
})
.join(" & ")
};
for (id, state) in self.states.iter().enumerate() {
out.push(format!("State: {} \"{}\"", id, state.data));
for (symid, sym) in &num_alph {
if let Some(succid) = self.successor(id, **sym) {
out.push(format!(
"[{}] {} {{{}}}",
sym_to_string(*symid),
succid,
parities.get(&(id, **sym, *succid)).unwrap_or(&highest_even)
));
}
}
}
out.push("--END--".to_string());
out.join("\n")
}
pub fn output_dot(&self) -> String {
assert!(!self.states.is_empty());
let mut out = Vec::new();
out.push("digraph NFA {".to_string());
out.push("\t\"\" [shape = none]".to_string());
for state in &self.states {
out.push(format!("\t\"{}\" [shape = circle]", state.data));
}
out.push("".to_string());
out.push(format!(
"\t\"\" -> \"{}\"",
&self.states.get(self.initial).unwrap().data
));
for ((o, sym), v) in &self.transitions {
let origin = &self.states.get(*o).unwrap().data;
let target = &self.states.get(*v).unwrap().data;
out.push(format!(
"\t\"{}\" -> \"{}\" [label=\"{}\"]",
origin, target, sym
));
}
out.push("}".to_string());
out.join("\n")
}
pub fn add(&mut self, elem: S) -> Index {
let index = self.states.len();
self.states.push(State { data: elem });
index
}
pub fn successor(&self, state: Index, sym: char) -> Option<&Index> {
self.transitions.get(&(state, sym))
}
pub fn map(&mut self, from: &S, sym: char, to: &S) {
match self.index(from) {
None => panic!("node {:#?} not found", from),
Some(idxfrom) => match self.index(to) {
None => panic!("node {:#?} not found", to),
Some(idxto) => self.transitions.insert((idxfrom, sym), idxto),
},
};
}
pub fn transition(&mut self, from: Index, sym: char, to: Index) {
assert!(from < self.states.len() && to < self.states.len());
self.transitions.insert((from, sym), to);
}
pub fn index(&self, state: &S) -> Option<Index> {
for (idx, data) in self.states.iter().enumerate() {
if data.data == *state {
return Some(idx);
}
}
None
}
pub fn state(&self, id: Index) -> Option<&State<S>> {
self.states.get(id)
}
pub fn size(&self) -> usize {
self.states.len()
}
pub fn display(&self) {
let base_path = "/tmp/";
let mut rng = thread_rng();
let filename = iter::repeat(())
.map(|()| rng.sample(Alphanumeric))
.take(7)
.collect::<String>();
let full_path = format!("{}{}.dot", base_path, filename);
let mut out_file =
File::create(full_path.clone()).expect("could not create temporary file");
out_file
.write_all(self.output_dot().as_bytes())
.expect("could not write temporary file");
Command::new("sh")
.arg("-c")
.arg(format!("xdot {} && rm {}", full_path, full_path))
.spawn()
.expect("could not launch xdot");
}
pub fn write_to_file<T: AsRef<Path> + Clone + Display>(&self, filename: T) {
debug!("writing file {}", filename);
let mut file = File::create(filename).expect("cannot open file for writing out automaton");
file.write_all(self.output_dot().as_bytes())
.expect("cannot write automaton to file");
}
pub fn render<T: AsRef<Path> + Clone + Display>(&self, filename: T) {
self.write_to_file(format!("{}.dot", filename));
Command::new("sh")
.arg("-c")
.arg(format!(
"dot -Tpng {}.dot -o {}.tmp.png && rm {}.dot",
filename, filename, filename
))
.spawn()
.expect("could not render file!")
.wait()
.expect("dot did not run to completion :/");
}
pub fn new_extend_congruence<R: BinaryRelation<Index>>(
&self,
_cong: &R,
_left: Index,
_right: Index,
) -> R {
unimplemented!()
}
pub fn extend_congruence(
&self,
cong: &Relation<Index>,
left: Index,
right: Index,
) -> Relation<Index> {
let mut current = cong.transitive();
for (p1, p2) in cong
.class_for(left)
.iter()
.cartesian_product(cong.class_for(right).iter())
{
current.add(*p1, *p2);
}
loop {
let mut toadd = HashSet::new();
for sym in &self.alphabet {
current = current.transitive();
let succs = current
.pairs
.iter()
.filter(|(p1, p2)| {
self.has_successor(*p1, *sym) && self.has_successor(*p2, *sym)
})
.map(|(p1, p2)| {
(
*self.successor(*p1, *sym).unwrap(),
*self.successor(*p2, *sym).unwrap(),
)
})
.filter(|(p1, p2)| !current.contains((*p1, *p2)))
.collect_vec();
if !succs.is_empty() {
for (p1, p2) in succs {
for (q1, q2) in current
.class_for(p1)
.iter()
.cartesian_product(current.class_for(p2).iter())
{
toadd.insert((*q1, *q2));
}
}
for (p1, p2) in &toadd {
current.add(*p1, *p2);
}
}
}
if toadd.is_empty() {
break;
}
}
current
}
}
impl DFA<Class> {
pub fn transient_extend_for(&mut self, state: Index, words: &[Word]) {
for word in words {
let mut cur = state;
let mut remainder: Word = match !word.base.is_empty() {
true => word.clone(),
false => word << 1,
};
for _i in 0..remainder.base.len() {
cur = match self.successor(cur, remainder.head()) {
Some(&s) => s,
None => {
let out = self.add(Class::dummy());
self.transition(cur, remainder.head(), out);
out
}
};
remainder -= 1;
}
let start = cur;
for _i in 0..(remainder.ext.len() - 1) {
cur = match self.successor(cur, remainder.head()) {
Some(&s) => s,
None => {
let out = self.add(Class::dummy());
self.transition(cur, remainder.head(), out);
out
}
};
remainder -= 1;
}
self.transition(cur, remainder.head(), start);
// too!("der basis teil muss einfach abgerollt werden und der periodische teil muss eine schleife bilden!");
// let first_char = word.chars().next().unwrap();
// let start = match self.successor(state, first_char) {
// Some(s) => *s,
// None => {
// let out = self.add(Class::dummy());
// self.transition(state, first_char, out);
// out
// }
// };
// for i in
// // go along the bare symbols of the word, leaving the last one
// for sym in remainder
// .base
// .chars()
// .chain(remainder.ext.chars().take(remainder.ext.len() - 1))
// {
// let new = match self.has_successor(cur, sym) {
// false => self.add(Class::dummy()),
// true => *self.successor(cur, sym).unwrap(),
// };
// self.transition(cur, sym, new);
// cur = new;
// }
// // little hack so that words with periodic part of length one actually produce cycles
// if cur == state {
// cur = start;
// }
// // now we can insert an edge back to the first state
// self.transition(cur, remainder.ext.chars().last().unwrap(), start);
}
}
}
impl DFA<Base> {
pub fn new_quotient<R: BinaryRelation<Index>>(&self, rel: &R) -> DFA<Class> {
let mut out = DFA::new(self.alphabet.clone());
let mut mapping = HashMap::new();
let mut gnippam = HashMap::new();
let cls = rel.all_classes();
for c in &cls {
let (c1, c2) = self
.states
.iter()
.enumerate()
.filter(|(i, _e)| c.contains(i))
.tee();
let state_id = out.add(Class {
children: c1.map(|(_i, e)| e.data.clone()).collect(),
});
let (d1, d2) = c2.tee();
for (child, _) in d1 {
mapping.insert(child, state_id);
}
gnippam.insert(state_id, d2.map(|(i, _c)| i).collect::<Vec<_>>());
}
for (s1, c1) in &gnippam {
assert!(!c1.is_empty());
for sym in &self.alphabet {
// if !out.has_successor(*s1, *sym) {
for q1 in c1 {
if let Some(q2) = self.successor(*q1, *sym) {
// we have found a successor, now search the class containing it
let c2 = mapping.get(q2).unwrap();
out.transition(*s1, *sym, *c2);
}
}
// }
}
}
out.set_initial(*mapping.get(&self.initial).unwrap());
out
}
pub fn quotient(&self, rel: &Relation<Index>) -> DFA<Class> {
let mut out = DFA::new(self.alphabet.clone());
let mut mapping = HashMap::new();
let mut gnippam = HashMap::new();
let cls = rel.classes();
for c in &cls {
let (c1, c2) = self
.states
.iter()
.enumerate()
.filter(|(i, _e)| c.contains(i))
.tee();
let state_id = out.add(Class {
children: c1.map(|(_i, e)| e.data.clone()).collect(),
});
let (d1, d2) = c2.tee();
for (child, _) in d1 {
mapping.insert(child, state_id);
}
gnippam.insert(state_id, d2.map(|(i, _c)| i).collect::<Vec<_>>());
}
for (s1, c1) in &gnippam {
assert!(!c1.is_empty());
for sym in &self.alphabet {
// if !out.has_successor(*s1, *sym) {
for q1 in c1 {
if let Some(q2) = self.successor(*q1, *sym) {
// we have found a successor, now search the class containing it
let c2 = mapping.get(q2).unwrap();
out.transition(*s1, *sym, *c2);
}
}
// }
}
}
out.set_initial(*mapping.get(&self.initial).unwrap());
out
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::fs::File;
use std::io::Write;
#[test]
fn reachable_test() {
let mut aut = DFA::new(vec!['a', 'b', 'c'].into_iter().collect());
let a = aut.add(base(""));
let b = aut.add(base("A"));
let c = aut.add(base("B"));
aut.set_initial(a);
aut.transition(a, 'a', a);
aut.transition(a, 'b', b);
aut.transition(a, 'c', c);
aut.transition(b, 'a', a);
aut.transition(b, 'b', b);
aut.transition(b, 'c', c);
aut.transition(c, 'a', a);
aut.transition(c, 'b', b);
aut.transition(c, 'c', c);
assert_eq!(aut.reachable(), vec![0, 1, 2].into_iter().collect());
}
#[test]
fn quotient_test() {
let mut aut = DFA::new(vec!['a', 'b', 'c'].into_iter().collect());
let a = aut.add(base(""));
let b = aut.add(base("A"));
let c = aut.add(base("B"));
aut.set_initial(a);
aut.transition(a, 'a', a);
aut.transition(a, 'b', b);
aut.transition(a, 'c', c);
aut.transition(b, 'a', a);
aut.transition(b, 'b', b);
aut.transition(b, 'c', c);
aut.transition(c, 'a', a);
aut.transition(c, 'b', b);
aut.transition(c, 'c', c);
let mut r = Relation::new(vec![0, 1, 2].into_iter());
r.add(1usize, 2);
r.add(0, 0);
let rr = r.transitive();
let qaut = aut.quotient(&rr);
println!("{:#?}", qaut.output_dot());
let mut output_file =
File::create("/home/leon/Code/rspni/samples/test.dot").expect("could not open file");
output_file
.write_all(qaut.output_dot().as_bytes())
.expect("could not write output file");
}
#[test]
fn advance_test() {
let mut aut = DFA::new(vec!['a', 'b'].into_iter().collect());
let a = aut.add(0usize);
let _b = aut.add(1);
aut.set_initial(a);
aut.transition(0, 'a', 1);
aut.transition(0, 'b', 0);
aut.transition(1, 'a', 0);
aut.transition(1, 'b', 1);
assert_eq!(aut.advance(0, "aabab".chars()), Some(1));
assert_eq!(aut.advance(0, "bbabababbabc".chars()), None);
match aut.run(a, "abbabbabab".chars()) {
RunResult::Successful(run) => println!("{:?}", run),
RunResult::Leaving(_) => panic!("this must not happen!"),
}
let cyc = aut.induced_cycle(&Word::new("", "ab"));
println!("{:?}", cyc);
}
}
+,a
+b,a
-a,b
,b,1
a,b,1
aa,b,1
ba,ba,0
bba,b,0
+,b
+a,b
+aa,b
-ba,ba
-bba,b
# SUCCESSFUL ATTEMPT, WHY?
# COMMENTED OUT LINES ARE NOT NECESSARY FOR A MINIMAL SAMPLE
,a,1
,ba,0
,bba,0
,bbaa,1
#,bbaaa,1
#b,a,1
#,ba,0
#b,baa,1
#a,b,0
#,b,0
# FAILED ATTEMPT: PARITIZATION NOT POSSIBLE
#,a,true
#b,a,true
#,b,false
#bb,a,true
#,baa,true
#b,baa,true
#,ba,false
#
#a,bba,false
#,abb,false
,a,true
,aab,true
,aba,true
,ba,false
,ab,false
#1
bbb,a,true
#2
,b,false
bab,a,true
#3
,babb,false
bbab,a,true
#4
bbbbb,a,true
#,abb,false
#bab,a,true
#4
#,baa,true
#5
#,ab,false
,b,0
b,a,1
b,abb,0
b,aba,1
b,abba,1
bb,a,1
a,aab,1
#ba,aab,1
,b,0
b,a,1
b,abb,0
b,aba,1
b,abba,1
bb,a,1
#b,bab,0
#b,ba,0
#b,aab,1
#b,baa,1
#,a,1
#,a,1
#positive: [+ : {(0 -a-> 1)(0 -b-> 0)(1 -a-> 0)}, + : {(0 -a-> 1)(1 -a-> 0)}]
#negative: [- : {(0 -a-> 1)(0 -b-> 0)(1 -b-> 0)}, - : {(0 -b-> 0)}]
#indefinite: [? : {(0 -a-> 1)(0 -b-> 0)(1 -a-> 0)(1 -b-> 0)}, ? : {(0 -a-> 1)(1 -b-> 0)}, ? : {(0 -a-> 1)(1 -a-> 0)(1 -b-> 0)}]
# after adding (,abba)
#positive: [+ : {(0 -a-> 0)}, + : {(0 -a-> 0)(0 -b-> 1)(1 -a-> 0)(1 -b-> 0)}, + : {(0 -a-> 0)(0 -b-> 1)(1 -a-> 0)}, + : {(0 -a-> 0)(0 -b-> 1)(1 -b-> 0)}]
#negative: [- : {(0 -b-> 1)(1 -b-> 0)}, - : {(0 -a-> 0)(0 -b-> 1)(1 -b-> 0)}]
#indefinite: [? : {(0 -b-> 1)(1 -a-> 0)}, ? : {(0 -b-> 1)(1 -a-> 0)(1 -b-> 0)}, ? : {(0 -a-> 0)(0 -b-> 1)(1 -a-> 0)(1 -b-> 0)}]
,ac,1
,c,0
,a,0
,aac,1
,acc,0
,b,1
,c,1
,ab,1
,ba,1
,a,0
,ca,0
,ac,0
#,abba,1
#,abb,0
,a,0
,b,0
,ba,1
,ab,1
,aba,1
,bab,1
a,b,0
b,a,0
b,ba,1
#,a,1
#,a,1
,b,0
#,ab,1
,ba,1
#,bba,1
#,abb,1
b,a,1
,ba,0
#,a,1
#,b,0
#ab,a,1
#bbbb,a,1
#,b,0
#,ab,0
#!/usr/bin/bash
# for this to work you need cargo-watch, https installed from cargo
# as well as browser-sync globally via npm
RUSTDOCFLAGS="--html-in-header /home/leon/Code/rspni/basedoc.html" cargo watch -s 'cargo doc --no-deps && browser-sync start --ss target/doc -s target/doc --directory --no-open'
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.12.0/dist/katex.min.css"
integrity="sha384-AfEj0r4/OFrOo5t7NnNe46zW/tFgW6x/bCJG8FqQCEo3+Aro6EYUG4+cU+KJWu/X" crossorigin="anonymous">
<!-- The loading of KaTeX is deferred to speed up page rendering -->
<script defer src="https://cdn.jsdelivr.net/npm/katex@0.12.0/dist/katex.min.js"
integrity="sha384-g7c+Jr9ZivxKLnZTDUhnkOnsh30B4H0rpLUpJ4jAIKs4fnJI+sEnkvrMWph2EDg4"
crossorigin="anonymous"></script>
<!-- To automatically render math in text elements, include the auto-render extension: -->
<script defer src="https://cdn.jsdelivr.net/npm/katex@0.12.0/dist/contrib/auto-render.min.js"
integrity="sha384-mll67QQFJfxn0IYznZYonOWZ644AWYC+Pt2cHqMaRhXVrursRwvLnLaebdGIlYNa" crossorigin="anonymous"
onload="renderMathInElement(document.body);"></script>
<script>
document.addEventListener("DOMContentLoaded", function () {
renderMathInElement(document.body, {
delimiters: [
{ left: "$$", right: "$$", display: true },
{ left: "\\(", right: "\\)", display: false },
{ left: "$", right: "$", display: false },
{ left: "\\[", right: "\\]", display: true }
]
});
});
</script>
[package]
name = "rspni"
version = "0.1.0"
authors = ["León Bohn <leon.bohn@rwth-aachen.de>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
fern = "0.6.0"
log = "0.4.11"
chrono = "0.4.19"
itertools = "0.9.0"
clap = "3.0.0-beta.2"
#proptest = "0.10.1"
rand = "0.7.3"
csv = "1.1.5"
serde = { version = "1.0.118", features = ["derive"] }
[package.metadata.docs.rs]
rustdoc-args = [ "--html-in-header", "basedoc.html" ]
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
[[package]]
name = "atty"
version = "0.2.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d9b39be18770d11421cdb1b9947a45dd3f37e93092cbf377614828a319d5fee8"
dependencies = [
"hermit-abi",
"libc",
"winapi",
]
[[package]]
name = "autocfg"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cdb031dd78e28731d87d56cc8ffef4a8f36ca26c38fe2de700543e627f8a464a"
[[package]]
name = "bitflags"
version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf1de2fe8c75bc145a2f577add951f8134889b4795d47466a54a5c846d691693"
[[package]]
name = "bstr"
version = "0.2.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "473fc6b38233f9af7baa94fb5852dca389e3d95b8e21c8e3719301462c5d9faf"
dependencies = [
"lazy_static",
"memchr",
"regex-automata",
"serde",
]
[[package]]
name = "byteorder"
version = "1.3.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "08c48aae112d48ed9f069b33538ea9e3e90aa263cfa3d1c24309612b1f7472de"
[[package]]
name = "cfg-if"
version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4785bdd1c96b2a846b2bd7cc02e86b6b3dbf14e7e53446c4f54c92a361040822"
[[package]]
name = "chrono"
version = "0.4.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "670ad68c9088c2a963aaa298cb369688cf3f9465ce5e2d4ca10e6e0098a1ce73"
dependencies = [
"libc",
"num-integer",
"num-traits",
"time",
"winapi",
]
[[package]]
name = "clap"
version = "3.0.0-beta.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4bd1061998a501ee7d4b6d449020df3266ca3124b941ec56cf2005c3779ca142"
dependencies = [
"atty",
"bitflags",
"clap_derive",
"indexmap",
"lazy_static",
"os_str_bytes",
"strsim",
"termcolor",
"textwrap",
"unicode-width",
"vec_map",
]
[[package]]
name = "clap_derive"
version = "3.0.0-beta.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "370f715b81112975b1b69db93e0b56ea4cd4e5002ac43b2da8474106a54096a1"
dependencies = [
"heck",
"proc-macro-error",
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "csv"
version = "1.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f9d58633299b24b515ac72a3f869f8b91306a3cec616a602843a383acd6f9e97"
dependencies = [
"bstr",
"csv-core",
"itoa",
"ryu",
"serde",
]
[[package]]
name = "csv-core"
version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2b2466559f260f48ad25fe6317b3c8dac77b5bdb5763ac7d9d6103530663bc90"
dependencies = [
"memchr",
]
[[package]]
name = "either"
version = "1.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e78d4f1cc4ae33bbfc157ed5d5a5ef3bc29227303d595861deb238fcec4e9457"
[[package]]
name = "fern"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8c9a4820f0ccc8a7afd67c39a0f1a0f4b07ca1725164271a64939d7aeb9af065"
dependencies = [
"log",
]
[[package]]
name = "getrandom"
version = "0.1.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fc587bc0ec293155d5bfa6b9891ec18a1e330c234f896ea47fbada4cadbe47e6"
dependencies = [
"cfg-if",
"libc",
"wasi 0.9.0+wasi-snapshot-preview1",
]
[[package]]
name = "hashbrown"
version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d7afe4a420e3fe79967a00898cc1f4db7c8a49a9333a29f8a4bd76a253d5cd04"
[[package]]
name = "heck"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "20564e78d53d2bb135c343b3f47714a56af2061f1c928fdb541dc7b9fdd94205"
dependencies = [
"unicode-segmentation",
]
[[package]]
name = "hermit-abi"
version = "0.1.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5aca5565f760fb5b220e499d72710ed156fdb74e631659e99377d9ebfbd13ae8"
dependencies = [
"libc",
]
[[package]]
name = "indexmap"
version = "1.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4fb1fa934250de4de8aef298d81c729a7d33d8c239daa3a7575e6b92bfc7313b"
dependencies = [
"autocfg",
"hashbrown",
]
[[package]]
name = "itertools"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "284f18f85651fe11e8a991b2adb42cb078325c996ed026d994719efcfca1d54b"
dependencies = [
"either",
]
[[package]]
name = "itoa"
version = "0.4.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dc6f3ad7b9d11a0c00842ff8de1b60ee58661048eb8049ed33c73594f359d7e6"
[[package]]
name = "lazy_static"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
[[package]]
name = "libc"
version = "0.2.80"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4d58d1b70b004888f764dfbf6a26a3b0342a1632d33968e4a179d8011c760614"
[[package]]
name = "log"
version = "0.4.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4fabed175da42fed1fa0746b0ea71f412aa9d35e76e95e59b192c64b9dc2bf8b"
dependencies = [
"cfg-if",
]
[[package]]
name = "memchr"
version = "2.3.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0ee1c47aaa256ecabcaea351eae4a9b01ef39ed810004e298d2511ed284b1525"
[[package]]
name = "num-integer"
version = "0.1.44"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d2cc698a63b549a70bc047073d2949cce27cd1c7b0a4a862d08a8031bc2801db"
dependencies = [
"autocfg",
"num-traits",
]
[[package]]
name = "num-traits"
version = "0.2.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9a64b1ec5cda2586e284722486d802acf1f7dbdc623e2bfc57e65ca1cd099290"
dependencies = [
"autocfg",
]
[[package]]
name = "os_str_bytes"
version = "2.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "afb2e1c3ee07430c2cf76151675e583e0f19985fa6efae47d6848a3e2c824f85"
[[package]]
name = "ppv-lite86"
version = "0.2.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac74c624d6b2d21f425f752262f42188365d7b8ff1aff74c82e45136510a4857"
[[package]]
name = "proc-macro-error"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "da25490ff9892aab3fcf7c36f08cfb902dd3e71ca0f9f9517bea02a73a5ce38c"
dependencies = [
"proc-macro-error-attr",
"proc-macro2",
"quote",
"syn",
"version_check",
]
[[package]]
name = "proc-macro-error-attr"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a1be40180e52ecc98ad80b184934baf3d0d29f979574e439af5a55274b35f869"
dependencies = [
"proc-macro2",
"quote",
"version_check",
]
[[package]]
name = "proc-macro2"
version = "1.0.24"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e0704ee1a7e00d7bb417d0770ea303c1bccbabf0ef1667dae92b5967f5f8a71"
dependencies = [
"unicode-xid",
]
[[package]]
name = "quote"
version = "1.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aa563d17ecb180e500da1cfd2b028310ac758de548efdd203e18f283af693f37"
dependencies = [
"proc-macro2",
]
[[package]]
name = "rand"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6a6b1679d49b24bbfe0c803429aa1874472f50d9b363131f0e89fc356b544d03"
dependencies = [
"getrandom",
"libc",
"rand_chacha",
"rand_core",
"rand_hc",
]
[[package]]
name = "rand_chacha"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f4c8ed856279c9737206bf725bf36935d8666ead7aa69b52be55af369d193402"
dependencies = [
"ppv-lite86",
"rand_core",
]
[[package]]
name = "rand_core"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "90bde5296fc891b0cef12a6d03ddccc162ce7b2aff54160af9338f8d40df6d19"
dependencies = [
"getrandom",
]
[[package]]
name = "rand_hc"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ca3129af7b92a17112d59ad498c6f81eaf463253766b90396d39ea7a39d6613c"
dependencies = [
"rand_core",
]
[[package]]
name = "regex-automata"
version = "0.1.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ae1ded71d66a4a97f5e961fd0cb25a5f366a42a41570d16a763a69c092c26ae4"
dependencies = [
"byteorder",
]
[[package]]
name = "rspni"
version = "0.1.0"
dependencies = [
"chrono",
"clap",
"csv",
"fern",
"itertools",
"log",
"rand",
"serde",
]
[[package]]
name = "ryu"
version = "1.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "71d301d4193d031abdd79ff7e3dd721168a9572ef3fe51a1517aba235bd8f86e"
[[package]]
name = "serde"
version = "1.0.118"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "06c64263859d87aa2eb554587e2d23183398d617427327cf2b3d0ed8c69e4800"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.118"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c84d3526699cd55261af4b941e4e725444df67aa4f9e6a3564f18030d12672df"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "strsim"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "73473c0e59e6d5812c5dfe2a064a6444949f089e20eec9a2e5506596494e4623"
[[package]]
name = "syn"
version = "1.0.51"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3b4f34193997d92804d359ed09953e25d5138df6bcc055a71bf68ee89fdf9223"
dependencies = [
"proc-macro2",
"quote",
"unicode-xid",
]
[[package]]
name = "termcolor"
version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2dfed899f0eb03f32ee8c6a0aabdb8a7949659e3466561fc0adf54e26d88c5f4"
dependencies = [
"winapi-util",
]
[[package]]
name = "textwrap"
version = "0.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "203008d98caf094106cfaba70acfed15e18ed3ddb7d94e49baec153a2b462789"
dependencies = [
"unicode-width",
]
[[package]]
name = "time"
version = "0.1.44"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6db9e6914ab8b1ae1c260a4ae7a49b6c5611b40328a735b21862567685e73255"
dependencies = [
"libc",
"wasi 0.10.0+wasi-snapshot-preview1",
"winapi",
]
[[package]]
name = "unicode-segmentation"
version = "1.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bb0d2e7be6ae3a5fa87eed5fb451aff96f2573d2694942e40543ae0bbe19c796"
[[package]]
name = "unicode-width"
version = "0.1.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9337591893a19b88d8d87f2cec1e73fad5cdfd10e5a6f349f498ad6ea2ffb1e3"
[[package]]
name = "unicode-xid"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f7fe0bb3479651439c9112f72b6c505038574c9fbb575ed1bf3b797fa39dd564"
[[package]]
name = "vec_map"
version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f1bddf1187be692e79c5ffeab891132dfb0f236ed36a43c7ed39f1165ee20191"
[[package]]
name = "version_check"
version = "0.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b5a972e5669d67ba988ce3dc826706fb0a8b01471c088cb0b6110b805cc36aed"
[[package]]
name = "wasi"
version = "0.9.0+wasi-snapshot-preview1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cccddf32554fecc6acb585f82a32a72e28b48f8c4c1883ddfeeeaa96f7d8e519"
[[package]]
name = "wasi"
version = "0.10.0+wasi-snapshot-preview1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1a143597ca7c7793eff794def352d41792a93c481eb1042423ff7ff72ba2c31f"
[[package]]
name = "winapi"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419"
dependencies = [
"winapi-i686-pc-windows-gnu",
"winapi-x86_64-pc-windows-gnu",
]
[[package]]
name = "winapi-i686-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
[[package]]
name = "winapi-util"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "70ec6ce85bb158151cae5e5c87f95a8e97d2c0c4b001223f33a334e3ce5de178"
dependencies = [
"winapi",
]
[[package]]
name = "winapi-x86_64-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"